this post was submitted on 27 Jan 2025
883 points (98.1% liked)


cross-posted from: https://lemm.ee/post/53805638

[–] fuck_u_spez_in_particular@lemmy.world 3 points 2 days ago (1 children)

confidently so in the face of overwhelming evidence

That I'd really like to see. And I mean more than the marketing bullshit the AI companies are putting out...

For the record, I was one of the first to jump on the AI hype train (as a programmer and computer scientist with a machine-learning background), following the development of GPT-1 through GPT-4, excited about having to write less boilerplate code, getting help fleshing out rough ideas, etc. GPT-4 came close to being genuinely helpful (similar with o1 etc., or Anthropic's models). But I seldom use AI these days (and I'm observing the same with colleagues and other people I know), because it actually slows me down or gives me wrong ideas; I end up arguing with it, only to watch it saturate at a local minimum yet again (i.e. it doesn't get better, no matter what input I try). So I have to do it myself anyway... which I should've done in the first place...

The same is true on the image-generation side (i.e. first with GANs, now with diffusion-based models).

I can get into more detail about transformer/attention-based models and their current plateau phase (i.e. more hardware doesn't actually make things significantly better; it gets exponentially more expensive to make things slightly better) if you really want...

I hope we make a breakthrough, of course, where a model actually learns to reason, but I fear that will take time, and it might even mean we need a different type of hardware.

[–] surph_ninja@lemmy.world 0 points 2 days ago (1 children)

Any other AI company, and most of that would be legitimate criticism of the overhype used to generate more funding. But how does any of that apply to DeepSeek, and the code & paper they released?

[–] fuck_u_spez_in_particular@lemmy.world 1 points 1 day ago (1 children)

DeepSeek

Yeah, it'll be exciting to see where this goes, i.e. whether it really develops into a useful tool, for certain. Though I'm slightly cautious nonetheless. It's not doing anything significantly different (i.e. it's still an LLM); it's just a lot cheaper and more efficient to train, and open for everyone (which is great).

[–] surph_ninja@lemmy.world 1 points 1 day ago (1 children)

What’s this “if” nonsense? I loaded up a light model of it, and already have put it to work.

[–] fuck_u_spez_in_particular@lemmy.world 0 points 1 day ago (1 children)

Have you actually read my text wall?

Even o1 (which AFAIK is roughly on par with R1-671B) wasn't really helpful for me. I often (actually all the time) need correct answers to complex problems, and LLMs just aren't capable of delivering that.

I still need to try out whether it's possible to train it on my/our codebase, so that it's at least usable as something like GitHub Copilot (which I also don't use, because it just isn't reliable enough and too often generates bugs). Also, I'm a fast typist; by the time the answer is there and I've parsed/read/understood the code, I'd already have written a better version myself.

[–] surph_ninja@lemmy.world 0 points 1 day ago (1 children)

Ahh. It’s overconfident neckbeard stuff then.

[–] fuck_u_spez_in_particular@lemmy.world 0 points 1 day ago (1 children)

You're just trolling, aren't you? Have you used AI while coding for a longer time, and then tried going without it for a while? I currently don't miss it... Keep in mind that you still have to check whether all the code is correct, etc. Writing code isn't what usually takes the most time for me; it's debugging, and finding architecturally sound, good solutions to the problem. And AI is definitely not good at that (even if you're not that experienced).

[–] surph_ninja@lemmy.world 1 points 1 day ago (1 children)

Yes, I have tested that use case multiple times. It performs well enough.

A calculator also isn’t much help, if the person operating it fucks up. Maybe the problem in your scenario isn’t the AI.

[–] fuck_u_spez_in_particular@lemmy.world 0 points 1 day ago (1 children)

As you're being unkind all the time, let me be unkind as well :)

A calculator also isn’t much help, if the person operating it fucks up. Maybe the problem in your scenario isn’t the AI.

If you can effectively use AI for your problems, maybe they're too repetitive, and actually just dumb boilerplate.

I'd rather solve problems that require actual intelligence (e.g. doing research, solving math problems, thinking about software architecture, solving problems efficiently), and I don't even want to deal with problems that require writing a lot of repetitive code, which AI may be (and often is not) of help with.

I have yet to see efficient generated Rust code that autovectorizes well without a lot of allocations, etc. I always get triggered by the insanely bad code quality of AI that just doesn't really understand what allocations are... Argh, I could go on...
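To make the allocation point concrete, here's a minimal sketch (my own illustration, not code from the thread, and the function names are made up): the alloc-heavy style generated code tends to produce versus an in-place slice version, which is the kind of simple contiguous loop LLVM's autovectorizer handles well.

```rust
// Alloc-heavy style often seen in generated code: a fresh Vec per call.
fn scale_alloc(data: &[f32], factor: f32) -> Vec<f32> {
    data.iter().map(|x| x * factor).collect()
}

// Allocation-free version: mutate the slice in place. A straightforward
// iterator loop over a contiguous slice needs no heap allocation and
// leaves no bounds checks for LLVM to worry about when vectorizing.
fn scale_in_place(data: &mut [f32], factor: f32) {
    for x in data.iter_mut() {
        *x *= factor;
    }
}

fn main() {
    let mut v = vec![1.0f32, 2.0, 3.0, 4.0];
    let copied = scale_alloc(&v, 2.0); // extra heap allocation
    scale_in_place(&mut v, 2.0);       // reuses the existing buffer
    assert_eq!(copied, v);
    assert_eq!(v, [2.0, 4.0, 6.0, 8.0]);
}
```

Both produce the same values; the difference only shows up in allocator pressure and in what the optimizer can do with the loop.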

[–] surph_ninja@lemmy.world 1 points 1 day ago

I think you're severely underestimating the amount of white-collar work that is just boilerplate, and understating how well AI can quickly produce a workable first draft. Or maybe you just aren't writing good prompts.