this post was submitted on 08 Jun 2025
622 points (95.9% liked)

[–] ZILtoid1991@lemmy.world 13 points 14 hours ago (3 children)

Thank you Captain Obvious! Only those who think LLMs are like "little people in the computer" didn't know this already.

[–] Nanook@lemm.ee 205 points 22 hours ago (42 children)

lol is this news? I mean, we call it AI, but it's just LLMs and variants; it doesn't think.

[–] MNByChoice@midwest.social 67 points 22 hours ago (1 children)

The "Apple" part. CEOs only care what companies say.

[–] kadup@lemmy.world 40 points 20 hours ago (5 children)

Apple is significantly behind and arrived late to the whole AI hype, so of course it's in their absolute best interest to keep showing how LLMs aren't special or amazingly revolutionary.

They're not wrong, but the motivation is also pretty clear.

[–] homesweethomeMrL@lemmy.world 20 points 18 hours ago

“Late to the hype” is actually a good thing. Gen AI is a scam wrapped in idiocy wrapped in a joke. That Apple is slow to ape the idiocy of Microsoft is just fine.

[–] BlaueHeiligenBlume@feddit.org 9 points 14 hours ago (1 children)

Of course; that's obvious to anyone with basic knowledge of neural networks, no?

[–] Endmaker@ani.social 1 points 9 hours ago

I still remember Geoff Hinton's criticisms of backpropagation.

IMO it is still remarkable what NNs managed to achieve: some form of emergent intelligence.

[–] Jhex@lemmy.world 49 points 19 hours ago (1 children)

this is so Apple, claiming to invent or discover something "first" 3 years later than the rest of the market

[–] brsrklf@jlai.lu 44 points 21 hours ago (2 children)

You know, despite not really believing LLM "intelligence" works anywhere like real intelligence, I kind of thought maybe being good at recognizing patterns was a way to emulate it to a point...

But that study seems to prove they're still not even good at that. At first I was wondering how hard the puzzles must have been, and then there's a bit about LLMs finishing 100-move Tower of Hanoi puzzles (which they were trained on) while failing 4-move river crossings. Logically, those problems are very similar... They also failed to apply a step-by-step solution they were given.
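
To see why a long Hanoi solution proves so little, here's a minimal sketch (my illustration, not the paper's): the entire optimal move sequence falls out of a three-line recursion, so emitting a "100-move" solution is recall of a mechanical pattern, not reasoning.

```python
# Illustration only (not from the paper): the optimal Tower of Hanoi
# move sequence is mechanically derivable, so reproducing a ~100-move
# solution takes pattern recall rather than reasoning.
def hanoi(n, src, dst, aux, moves):
    """Append the optimal n-disk move sequence to `moves`."""
    if n == 0:
        return
    hanoi(n - 1, src, aux, dst, moves)  # move n-1 disks out of the way
    moves.append((src, dst))            # move the largest disk
    hanoi(n - 1, aux, dst, src, moves)  # re-stack the n-1 disks on top

moves = []
hanoi(7, "A", "C", "B", moves)
print(len(moves))  # 2**7 - 1 = 127 moves, roughly the "100 move" scale
```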

[–] auraithx@lemmy.dbzer0.com 36 points 21 hours ago

This paper doesn’t prove that LLMs aren’t good at pattern recognition, it demonstrates the limits of what pattern recognition alone can achieve, especially for compositional, symbolic reasoning.

[–] technocrit@lemmy.dbzer0.com 16 points 19 hours ago* (last edited 19 hours ago)

Computers are awesome at "recognizing patterns" as long as the pattern is a statistical average of some possibly worthless data set. And it really helps if the computer is set up ahead of time to recognize pre-determined patterns.

[–] surph_ninja@lemmy.world 8 points 16 hours ago (3 children)

You assume humans do the opposite? We literally institutionalize humans who don't follow set patterns.

[–] petrol_sniff_king@lemmy.blahaj.zone 19 points 15 hours ago (6 children)

Maybe you failed all your high school classes, but that ain't got none to do with me.

[–] LemmyIsReddit2Point0@lemmy.world 13 points 15 hours ago

We also reward people who can memorize and regurgitate even if they don't understand what they are doing.

[–] sev@nullterra.org 47 points 22 hours ago (28 children)

Just fancy Markov chains with the ability to link bigger and bigger token sets. It can only ever kick off processing as a response and can never initiate any line of reasoning. This, along with the fact that its working set of data can never be updated moment-to-moment, means that it would be a physical impossibility for any LLM to achieve any real "reasoning" processes.
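
For anyone who hasn't seen the analogy spelled out, here's a toy sketch (an illustration of the comparison above, not how transformers are actually implemented): a model that picks the next token purely from statistics over contexts it has already seen, with no way to update itself or initiate anything.

```python
# Toy Markov-chain text generator (illustrates the analogy above; real
# LLMs use learned continuous representations, not lookup tables).
import random
from collections import defaultdict

def train(tokens, order=2):
    """Map each length-`order` context to the successors seen after it."""
    model = defaultdict(list)
    for i in range(len(tokens) - order):
        model[tuple(tokens[i:i + order])].append(tokens[i + order])
    return model

def generate(model, seed, length=10, order=2):
    out = list(seed)
    for _ in range(length):
        successors = model.get(tuple(out[-order:]))
        if not successors:  # frozen "weights": a novel context dead-ends
            break
        out.append(random.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran off the mat".split()
print(generate(train(corpus), ("the", "cat")))
```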

[–] technocrit@lemmy.dbzer0.com 23 points 20 hours ago* (last edited 19 hours ago) (4 children)

Why would they "prove" something that's completely obvious?

The burden of proof is on the grifters who have overwhelmingly been making false claims and distorting language for decades.

[–] tauonite@lemmy.world 15 points 14 hours ago

That's called science

[–] TheRealKuni@midwest.social 31 points 17 hours ago (2 children)

Why would they "prove" something that's completely obvious?

I don’t want to be critical, but I think if you step back a bit and look and what you’re saying, you’re asking why we would bother to experiment and prove what we think we know.

That’s a perfectly normal and reasonable scientific pursuit. Yes, in a rational society the burden of proof would be on the grifters, but that’s never how it actually works. It’s always the doctors disproving the cure-all, not the snake oil salesmen failing to prove their own product.

There is value in this research, even if it fits what you already believe on the subject. I would think you would be thrilled to have your hypothesis confirmed.

[–] yeahiknow3@lemmings.world 22 points 19 hours ago* (last edited 19 hours ago) (1 children)

They’re just using the terminology that’s widespread in the field. In a sense, the paper’s purpose is to prove that this terminology is unsuitable.

[–] Mbourgon@lemmy.world 10 points 18 hours ago (1 children)

Not when large swaths of people are being told to use it every day. Upper management has bought in on it.

[–] limelight79@lemmy.world 4 points 11 hours ago* (last edited 11 hours ago)

Yep. I'm retired now, but before retirement a month or so ago, I was working on a project that relied on several hundred people back in 2020. "Why can't AI do it?"

The people I worked with are continuing the research and putting it up against the human coders, but... there was definitely an element of "AI can do that, so we won't need people next time." I sincerely hope management listens to reason. Our decisions would potentially lead to firing people, so I think we were able to push back on "AI can make all of these decisions"... for now.

The AI people were all in; they were ready to build an interface that told the human what the AI would recommend for each item. Errm, no, that's not how an independent test works. We had to reel them back in.

[–] LonstedBrowryBased@lemm.ee 13 points 18 hours ago (2 children)

Yeah, of course they do, they're computers.

[–] finitebanjo@lemmy.world 21 points 17 hours ago (3 children)

That's not really a valid argument for why, but yes, the models that assemble statistics from training data are all bullshitting. TBH idk how people can convince themselves otherwise.

[–] EncryptKeeper@lemmy.world 16 points 16 hours ago (2 children)

TBH idk how people can convince themselves otherwise.

They don’t convince themselves. They’re convinced by the multi-billion-dollar corporations pouring unholy amounts of money not only into the development of AI, but into its marketing. Marketing designed to convince them not only that AI is something it’s not, but also that anyone who says otherwise (like you) is just a Luddite who’s going to be “left behind”.

[–] leftzero@lemmynsfw.com 1 points 7 hours ago

LLMs are also very good at convincing their users that they know what they are saying.

It's what they're really selected for. Looking accurate sells more than being accurate.

I wouldn't be surprised if many of the people selling LLMs as AI have drunk their own kool-aid (of course most just care about the line going up, but still).

[–] Blackmist@feddit.uk 5 points 14 hours ago (1 children)

It's no surprise to me that the person at work who is most excited by AI, is the same person who is most likely to be replaced by it.

[–] turmacar@lemmy.world 13 points 17 hours ago* (last edited 17 hours ago) (4 children)

I think because it's language.

There's a famous anecdote about Charles Babbage presenting his difference engine (a gear-based calculator): someone asked, "If you put in the wrong figures, will the correct ones be output?", and Babbage couldn't understand how anyone could so thoroughly misunderstand that the machine is just a machine.

People are people, the main thing that's changed since the Cuneiform copper customer complaint is our materials science and networking ability. Most things that people interact with every day, most people just assume work like it appears to on the surface.

And until now, nothing other than a person could do math problems or talk back to you. So people assume that means intelligence.

[–] finitebanjo@lemmy.world 10 points 16 hours ago

I often feel like I'm surrounded by idiots, but even I can't begin to imagine what it must have felt like to be Charles Babbage explaining computers to people in 1840.

[–] intensely_human@lemm.ee 1 points 10 hours ago (1 children)

They aren't bullshitting, because the training data is based on reality. Reality bleeds through the training data into the model. The model is a reflection of reality.

[–] intensely_human@lemm.ee 1 points 10 hours ago

Computers are better at logic than brains are. We emulate logic; they do it natively.

It just so happens there's no logical algorithm for "reasoning" a problem through.
