this post was submitted on 08 Jun 2025
677 points (95.6% liked)

Technology

(page 3) 50 comments
[–] brsrklf@jlai.lu 45 points 1 day ago (2 children)

You know, despite not really believing LLM "intelligence" works anywhere like real intelligence, I kind of thought maybe being good at recognizing patterns was a way to emulate it to a point...

But this study seems to show they're still not even good at that. At first I wondered how hard the puzzles must have been, and then there's a bit about LLMs finishing 100-move Tower of Hanoi solutions (which they were trained on) while failing 4-move river crossings. Logically, those problems are very similar... They also fail to apply a step-by-step solution they were given.
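For scale: the optimal Tower of Hanoi solution for n disks takes 2^n − 1 moves, so a ~100-move run corresponds to only about 7 disks. A minimal recursive solver (a generic sketch, not the paper's actual harness) makes that concrete:

```python
def hanoi(n, src="A", aux="B", dst="C"):
    """Yield the optimal Tower of Hanoi move sequence: 2**n - 1 moves for n disks."""
    if n == 0:
        return
    yield from hanoi(n - 1, src, dst, aux)  # clear n-1 disks onto the spare peg
    yield (src, dst)                        # move the largest disk to the target
    yield from hanoi(n - 1, aux, src, dst)  # restack the n-1 disks on top of it

print(len(list(hanoi(7))))  # 127 moves, already past the ~100-move mark
```

The river-crossing puzzles, by contrast, need only a handful of moves but have no mechanical recursion like this to lean on, which is exactly the asymmetry the comment points out.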

[–] auraithx@lemmy.dbzer0.com 37 points 1 day ago

This paper doesn’t prove that LLMs aren’t good at pattern recognition, it demonstrates the limits of what pattern recognition alone can achieve, especially for compositional, symbolic reasoning.

[–] technocrit@lemmy.dbzer0.com 15 points 22 hours ago* (last edited 22 hours ago)

Computers are awesome at "recognizing patterns" as long as the pattern is a statistical average of some possibly worthless data set. And it really helps if the computer is set up ahead of time to recognize pre-determined patterns.

[–] sev@nullterra.org 50 points 1 day ago (27 children)

Just fancy Markov chains with the ability to link bigger and bigger token sets. It can only ever kick off processing as a response and can never initiate any line of reasoning. This, along with the fact that its working set of data can never be updated moment-to-moment, means that it would be a physical impossibility for any LLM to achieve any real "reasoning" processes.
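To make the "fancy Markov chain" framing concrete, here's a toy n-gram chain in Python. Real LLMs differ enormously (learned attention over long contexts rather than exact-match lookup tables), but they share the interface the comment describes: they can only ever continue a prompt.

```python
import random
from collections import defaultdict

def build_chain(tokens, order=2):
    """Map each length-`order` context to the tokens observed after it."""
    chain = defaultdict(list)
    for i in range(len(tokens) - order):
        chain[tuple(tokens[i:i + order])].append(tokens[i + order])
    return chain

def generate(chain, seed, order=2, length=10, rng=None):
    """Sample forward from a seed context; it can only continue, never initiate."""
    rng = rng or random.Random(0)
    out = list(seed)
    for _ in range(length):
        followers = chain.get(tuple(out[-order:]), [])
        if not followers:
            break  # no known continuation for this context
        out.append(rng.choice(followers))
    return out

chain = build_chain("the cat sat on the mat".split())
print(" ".join(generate(chain, ["the", "cat"])))  # the cat sat on the mat
```

On this toy corpus every context has exactly one continuation, so the output is just the training sentence back again, which is the degenerate case of "memorizing patterns."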

[–] kescusay@lemmy.world 17 points 1 day ago (2 children)

I can envision a system where an LLM becomes one part of a reasoning AI, acting as a kind of fuzzy "dataset" that a proper neural network incorporates and reasons with, and where the LLM is kept (sort of) updated in real time via MCP servers that feed in anything new it learns.

But I don't think we're anywhere near there yet.

[–] homura1650@lemm.ee 2 points 15 hours ago (1 children)

LLMs (at least in their current form) are proper neural networks.

[–] technocrit@lemmy.dbzer0.com 24 points 23 hours ago* (last edited 22 hours ago) (6 children)

Why would they "prove" something that's completely obvious?

The burden of proof is on the grifters who have overwhelmingly been making false claims and distorting language for decades.

[–] yeahiknow3@lemmings.world 22 points 22 hours ago* (last edited 22 hours ago) (1 children)

They’re just using the terminology that’s widespread in the field. In a sense, the paper’s purpose is to prove that this terminology is unsuitable.

[–] Mbourgon@lemmy.world 10 points 21 hours ago (1 children)

Not when large swaths of people are being told to use it every day. Upper management has bought in on it.

[–] surph_ninja@lemmy.world 8 points 19 hours ago (9 children)

You assume humans do the opposite? We literally institutionalize humans who don't follow set patterns.

[–] LonstedBrowryBased@lemm.ee 13 points 21 hours ago (13 children)

Yah, of course they do, they're computers

[–] intensely_human@lemm.ee 1 points 13 hours ago

Computers are better at logic than brains are. We emulate logic; they do it natively.

It just so happens there's no logical algorithm for "reasoning" a problem through.

[–] reksas@sopuli.xyz 35 points 1 day ago (4 children)

does ANY model reason at all?

[–] 4am@lemm.ee 33 points 1 day ago (3 children)

No, and to make that work using the current structures we use for creating AI models, we'd probably need all the collective computing power on Earth at once.

[–] sp3ctr4l@lemmy.dbzer0.com 17 points 1 day ago* (last edited 1 day ago) (2 children)

This has been known for years, this is the default assumption of how these models work.

You would have to prove that some kind of actual reasoning capacity has arisen as... some kind of emergent complexity phenomenon... not the other way around.

Corpos have just marketed/gaslit us/themselves so hard that they apparently forgot this.

[–] atlien51@lemm.ee 14 points 1 day ago (2 children)

Employers who are foaming at the mouth at the thought of replacing their workers with cheap AI:

🫢

[–] flandish@lemmy.world 19 points 1 day ago

stochastic parrots. all of them. just upgraded “soundex” models.

this should be no surprise, of course!
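For anyone who hasn't met the reference: Soundex is an old phonetic-indexing algorithm that hashes a name down to a letter plus three digits. A rough sketch (ignoring the h/w separator edge cases of the full spec) shows how shallow that kind of pattern matching is:

```python
def soundex(word):
    """Rough classic Soundex: first letter + up to 3 digit codes, zero-padded."""
    # Consonant groups mapped to digits; vowels, h, w, y carry no code.
    codes = {c: d for d, letters in enumerate(
        ["bfpv", "cgjkqsxz", "dt", "l", "mn", "r"], start=1) for c in letters}
    word = word.lower()
    result = word[0].upper()
    prev = codes.get(word[0])
    for c in word[1:]:
        code = codes.get(c)
        if code and code != prev:  # skip uncoded letters and adjacent duplicates
            result += str(code)
        prev = code
    return (result + "000")[:4]

print(soundex("Robert"), soundex("Rupert"))  # R163 R163
```

Two different names collapsing to the same bucket is the whole point of the comparison: a lossy fingerprint of the input, not an understanding of it.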

[–] mfed1122@discuss.tchncs.de 12 points 1 day ago* (last edited 1 day ago) (14 children)

This sort of thing has been published a lot for a while now, but why is it assumed that this isn't what human reasoning consists of? Isn't all our reasoning ultimately a form of pattern memorization? I sure feel like it is. So to me, all these studies proving they're "just" memorizing patterns don't prove anything beyond that, unless coupled with research on the human brain showing we do something different.
