this post was submitted on 08 Jun 2025
828 points (95.4% liked)

Technology

71309 readers
4626 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
 

LOOK MAA I AM ON FRONT PAGE

[–] communist@lemmy.frozeninferno.xyz 11 points 4 days ago* (last edited 4 days ago) (16 children)

I think it's important to note (I'm not an LLM, I know that phrase triggers you to assume I am) that they haven't proven this is an inherent architectural issue, which I think would be the necessary next step for that assertion.

Do we know that they don't reason and are incapable of it, or do we just know that for certain problems they jump to memorized solutions? Is it possible to create an arrangement of weights that can genuinely reason, even if the current models don't? That's the big question that needs answering. It's still possible that we just haven't properly incentivized reasoning over memorization during training.

If someone can objectively answer "no" to that, the bubble collapses.

[–] MouldyCat@feddit.uk 3 points 3 days ago

In case you haven't seen it, the paper is here - https://machinelearning.apple.com/research/illusion-of-thinking (PDF linked on the left).

The puzzles the researchers chose are spatial and logical reasoning puzzles, so certainly not the natural domain of LLMs. Unfortunately, the paper doesn't give a clear definition of reasoning; I might surmise it as "analysing a scenario and extracting rules that allow you to achieve a desired outcome".

They also don't provide the prompts they use - not even for the cases where they say they provide the algorithm in the prompt, which makes that aspect less convincing to me.

What I did find noteworthy was how the models were able to provide around 100 steps correctly for larger Tower of Hanoi problems, but only 4 or 5 correct steps for larger River Crossing problems. I think the River Crossing problem is like the one where you have a boatman who wants to get a fox, a chicken and a bag of rice across a river, but can only take two in his boat at one time? In any case, the researchers suggest that this could be because there will be plenty of examples of Tower of Hanoi with larger numbers of disks, while not so many examples of the River Crossing with a lot more than the typical number of items being ferried across. This is taken as more evidence that the LLMs (and LRMs) are merely recalling examples they've seen, rather than genuinely working them out.
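For context on what "around 100 steps" means for Tower of Hanoi: the optimal solution for n disks always takes 2^n - 1 moves and comes from a tiny recursive procedure, so ~100 correct steps corresponds to roughly a 7-disk instance (127 moves). Here's a minimal sketch of that procedure (my own illustration, not code from the paper):

```python
# Classic recursive Tower of Hanoi solution (illustrative sketch, not from the paper).
# The optimal solution for n disks always has 2**n - 1 moves.

def hanoi_moves(n, source="A", target="C", spare="B"):
    """Return the optimal sequence of (from_peg, to_peg) moves for n disks."""
    if n == 0:
        return []
    return (
        hanoi_moves(n - 1, source, spare, target)    # move n-1 disks out of the way
        + [(source, target)]                         # move the largest disk
        + hanoi_moves(n - 1, spare, target, source)  # stack the n-1 disks back on top
    )

if __name__ == "__main__":
    for n in (3, 7, 10):
        print(n, "disks ->", len(hanoi_moves(n)), "moves")  # 7, 127, 1023
```

The point being: a fixed, very short procedure generates arbitrarily long correct move sequences, so a long stretch of correct Hanoi steps doesn't by itself distinguish "executing a memorized pattern" from "working it out".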

[–] flandish@lemmy.world 18 points 4 days ago

stochastic parrots. all of them. just upgraded “soundex” models.

this should be no surprise, of course!

[–] ZILtoid1991@lemmy.world 11 points 4 days ago (3 children)

Thank you, Captain Obvious! Only those who think LLMs are like "little people in the computer" didn't know this already.

[–] sp3ctr4l@lemmy.dbzer0.com 16 points 4 days ago* (last edited 4 days ago) (2 children)

This has been known for years; it's the default assumption of how these models work.

You would have to prove that some kind of actual reasoning capacity has arisen as... some kind of emergent complexity phenomenon... not the other way around.

Corpos have just marketed/gaslit us/themselves so hard that they apparently forgot this.

[–] LonstedBrowryBased@lemm.ee 12 points 4 days ago (14 children)

Yeah, of course they do, they're computers.

[–] atlien51@lemm.ee 14 points 4 days ago (2 children)

Employers who are foaming at the mouth at the thought of replacing their workers with cheap AI:

🫢

[–] mfed1122@discuss.tchncs.de 13 points 4 days ago* (last edited 4 days ago) (14 children)

This sort of thing has been published a lot for a while now, but why is it assumed that this isn't what human reasoning consists of? Isn't all our reasoning ultimately a form of pattern memorization? I sure feel like it is. So to me, all these studies that prove they're "just" memorizing patterns don't prove anything other than that, unless coupled with research on the human brain to prove we do something different.
