this post was submitted on 08 Jun 2025
823 points (95.6% liked)

Technology


LOOK MAA I AM ON FRONT PAGE

top 50 comments
[–] burgerpocalyse@lemmy.world 3 points 7 hours ago

hey, I can't recognize patterns, so they're smarter than me at least

[–] SoftestSapphic@lemmy.world 96 points 1 day ago (5 children)

Wow it's almost like the computer scientists were saying this from the start but were shouted over by marketing teams.

[–] aidan@lemmy.world 2 points 7 hours ago

And engineers who stood to make a lot of money

[–] zbk@lemmy.ca 22 points 1 day ago

This! Capitalism is going to be the end of us all. OpenAI has gotten away with IP theft, disinformation regarding AI, and maybe even the murder of their whistleblower.

[–] technocrit@lemmy.dbzer0.com 4 points 1 day ago

It's hard to be heard when you're buried under all that sweet VC/grant money.

[–] technocrit@lemmy.dbzer0.com 28 points 1 day ago* (last edited 1 day ago) (1 children)

Peak pseudo-science. The burden of evidence is on the grifters who claim "reason". But neither side has any objective definition of what "reason" means. It's pseudo-science against pseudo-science in a fierce battle.

[–] x0x7@lemmy.world 7 points 1 day ago* (last edited 1 day ago) (1 children)

Even defining reason is hard and becomes a matter of philosophy more than science. For example, apply the same claims to people. Now I've given you something to think about. Or should I say the Markov chain in your head has a new topic to generate thought states for.

[–] I_Has_A_Hat@lemmy.world 4 points 1 day ago* (last edited 1 day ago) (1 children)

By many definitions, reasoning IS just a form of pattern recognition, so the lines are definitely blurred.

[–] billwashere@lemmy.world 49 points 1 day ago (7 children)

When are people going to realize that, in its current state, an LLM is not intelligent? It doesn't reason. It does not have intuition. It's a word predictor.

[–] x0x7@lemmy.world 9 points 1 day ago* (last edited 1 day ago) (1 children)

Intuition is about the only thing it has. It's a statistical system. The problem is it doesn't have logic. We assume that because it's computer-based it must be more logic-oriented, but it's the opposite. That's the problem. We can't get it to do logic very well because it basically feels out the next token by something like instinct. In particular, it doesn't mask out or disregard irrelevant information very well when two segments are near each other in embedding space, since proximity doesn't guarantee relevance. So the model just weighs all of this info, relevant or irrelevant, into a weighted feeling for the next token.
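
A toy sketch of that weighting (the 2-D vectors here are made up for illustration; real models use thousands of learned dimensions and trained projections): proximity in embedding space drives the weight, and nothing in the math checks logical relevance.

```python
import numpy as np

def attention_weights(query, keys):
    """Softmax over dot-product similarity: everything close to the
    query in embedding space gets weight, relevant or not."""
    scores = keys @ query
    exp = np.exp(scores - scores.max())  # numerically stable softmax
    return exp / exp.sum()

# Hypothetical 2-D embeddings: the off-topic vector happens to sit
# near the query, so it still pulls almost half the weight.
query     = np.array([1.0, 0.2])
relevant  = np.array([0.9, 0.1])
off_topic = np.array([0.8, 0.3])
print(attention_weights(query, np.stack([relevant, off_topic])))
# -> roughly [0.51, 0.49]
```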

This is the core problem. People can handle fuzzy topics and discrete topics. But we really struggle to create any system that can do both like we can. Either we create programming logic that is purely discrete or we create statistics that are fuzzy.

Of course, this issue of masking out information that is close in embedding space but irrelevant to a logical premise is something many humans suck at too. But high-functioning humans don't, and we can't get these models to copy that ability. Too many people, sadly many on the left in particular, will treat association not just as always relevant but sometimes as equivalence. E.g.: racism is associated with Nazism, Nazism is associated with patriarchy, patriarchy is historically related to the origins of capitalism, ∴ Nazism ≡ capitalism. Even though national socialism was anti-capitalist. Associative thinking removes nuance, and sadly some people think this way. And they 100% can be replaced by LLMs today, because at least the LLM mimics what logic looks like better, though it's still built on blind association. It just has more blind associations, and fine-tuned weighting for summing them, than a human does. So it can carry that mask of logic further than a human on the associative thought train can.

[–] Slaxis@discuss.tchncs.de 1 points 7 hours ago

You had a compelling description of how ML models work and just had to swerve into politics, huh?

[–] StereoCode@lemmy.world 2 points 20 hours ago

You'd think the M in LLM would give it away.

[–] NotASharkInAManSuit@lemmy.world 5 points 1 day ago (2 children)

People think they want AI, but they don’t even know what AI is on a conceptual level.

[–] Buddahriffic@lemmy.world 4 points 21 hours ago (1 children)

They want something like the Star Trek computer, or one of Tony Stark's AIs that were basically deus ex machinas for solving some hard problem behind the scenes. Then it can say "model solved", or show a test simulation where the ship doesn't explode. Or sometimes one where it only has an 85% chance of exploding when it used to be 100%, at which point human intuition comes in and saves the day by suddenly being better than the AI again, threading that 15% needle (or maybe abducting the captain to go have lizard babies).

AIs that are smarter than us but for some reason don't replace or even really join us (Vision being an exception to the 2nd, and Ultron trying to be an exception to the 1st).

[–] technocrit@lemmy.dbzer0.com 4 points 1 day ago* (last edited 1 day ago) (2 children)

Yeah, I often think about that Rick and Morty bit. Grifters are like, "We made an AI ankle!!!" And I'm like, "That's not actually something people with busted ankles want. They just want to walk. No need for a sentient ankle." It's a real gross distortion of science how everything needs to be "AI" nowadays.

[–] Mniot@programming.dev 39 points 1 day ago

I don't think the article summarizes the research paper well. The researchers gave the AI models simple-but-large (which they confusingly called "complex") puzzles. Like Towers of Hanoi but with 25 discs.

The solution to these puzzles is nothing but patterns. You can write code that will solve the Tower puzzle for any size n and the whole program is less than a screen.
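
For a sense of scale, a minimal recursive sketch (the peg names are arbitrary) that generates the complete solution for any n:

```python
def hanoi(n, source, target, spare, moves):
    """Append the moves that transfer n discs from source to target."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # clear the way
    moves.append((source, target))              # move the largest disc
    hanoi(n - 1, spare, target, source, moves)  # restack on top of it

moves = []
hanoi(25, "A", "C", "B", moves)
print(len(moves))  # 2**25 - 1 = 33,554,431 moves, all purely mechanical
```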

The problem the researchers see is that on these long, pattern-based solutions, the models follow a bad path and then just give up long before they hit their limit on tokens. The researchers don't have an answer for why this is, but they suspect that the reasoning doesn't scale.

[–] minoscopede@lemmy.world 65 points 1 day ago* (last edited 1 day ago) (13 children)

I see a lot of misunderstandings in the comments 🫤

This is a pretty important finding for researchers, and it's not obvious by any means. This finding is not showing a problem with LLMs' abilities in general. The issue they discovered is specifically for so-called "reasoning models" that iterate on their answer before replying. It might indicate that the training process is not sufficient for true reasoning.

Most reasoning models are not incentivized to think correctly, and are only rewarded based on their final answer. This research might indicate that's a flaw that needs to be corrected before models can actually reason.
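
A hedged sketch of what "rewarded only on the final answer" means (names and structure here are illustrative, not any lab's actual training code):

```python
def outcome_reward(reasoning_steps: list[str],
                   final_answer: str,
                   correct_answer: str) -> float:
    """Outcome-based reward: the chain of thought is never inspected,
    so flawed reasoning that lands on the right answer scores the same
    as sound reasoning."""
    del reasoning_steps  # intentionally unused: only the answer counts
    return 1.0 if final_answer.strip() == correct_answer.strip() else 0.0
```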

[–] Knock_Knock_Lemmy_In@lemmy.world 16 points 1 day ago (5 children)

When given explicit instructions to follow, models failed because they had not seen similar instructions before.

This paper shows that there is no reasoning in LLMs at all, just extended pattern matching.

[–] technocrit@lemmy.dbzer0.com 6 points 1 day ago* (last edited 1 day ago)

There's probably a lot of misunderstanding because these grifters intentionally use misleading language: AI, reasoning, etc.

If they stuck to scientifically descriptive terms, it would be much more clear and much less sensational.

[–] REDACTED@infosec.pub 10 points 1 day ago* (last edited 1 day ago) (4 children)

What confuses me is that we seemingly keep pushing away what counts as reasoning. Not too long ago, some smart algorithms or a bunch of if/then instructions in software were officially, by definition, software/computer reasoning. Logically, CPUs do it all the time. Suddenly, when AI is doing that with pattern recognition, memory, and even more advanced algorithms, it's no longer reasoning? I feel like at this point the more relevant question is "What exactly is reasoning?". Before you answer, understand that most humans seemingly live by pattern recognition, not reasoning.

https://en.wikipedia.org/wiki/Reasoning_system
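
For context on that older, classical sense of the term, here's a toy forward-chaining rule engine (a sketch; the facts and rules are invented for illustration). This kind of if/then inference is exactly what "reasoning system" used to mean:

```python
# Toy forward chaining: fire if/then rules until no new facts appear.
rules = [
    ({"has_fur", "gives_milk"}, "mammal"),
    ({"mammal", "eats_meat"}, "carnivore"),
    ({"carnivore", "has_stripes"}, "tiger"),
]

def infer(facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)  # rule fires, new fact derived
                changed = True
    return facts

print(infer({"has_fur", "gives_milk", "eats_meat", "has_stripes"}))
# -> adds "mammal", "carnivore", "tiger" by chained deduction
```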

[–] stickly@lemmy.world 5 points 1 day ago

If you want to boil down human reasoning to pattern recognition, the sheer amount of stimuli and associations built off of that input absolutely dwarfs anything an LLM will ever be able to handle. It's like comparing PhD reasoning to a dog's reasoning.

While a dog can learn some interesting tricks, and the smartest dogs can solve simple novel problems, there are hard limits. They simply lack strong metacognition and the ability to make simple logical inferences (e.g., why they fail at the shell game).

Now we make that chasm even larger by cutting the stimuli to a fixed token limit. An LLM can do some clever tricks within that limit, but it's designed to do exactly those tricks and nothing more. To get anything resembling human ability you would have to design something to match human complexity, and we don't have the tech to make a synthetic human.

[–] theherk@lemmy.world 14 points 1 day ago

Yeah, these comments have the three hallmarks of Lemmy:

  • "AI is just autocomplete" mantras.
  • Apple is always synonymous with bad and dumb.
  • Rare pockets of really thoughtful comments.

Thanks for at least being the last.

[–] Allah@lemm.ee 3 points 1 day ago

Cognitive scientist Douglas Hofstadter (1979) argued that reasoning emerges from pattern recognition and analogy-making, abilities that modern AI demonstrably possesses. The question isn't whether AI can reason, but how its reasoning differs from ours.

[–] melsaskca@lemmy.ca 9 points 1 day ago (1 children)

It's all "one instruction at a time" regardless of high processor speeds and words like "intelligent" being bandied about. "Reason" discussions should fall into the same query bucket as "sentience".


XD so, like a regular school/university student that just wants to get passing grades?

[–] skisnow@lemmy.ca 26 points 1 day ago (1 children)

What's hilarious/sad is the response to this article over on Reddit's "singularity" sub, where all the top comments are from people who've obviously never gotten all the way through a research paper in their lives, all trashing Apple and claiming its researchers don't understand AI or "reasoning". It's a weird cult.

[–] FreakinSteve@lemmy.world 20 points 1 day ago (4 children)

NOOOOOOOOO

SHIIIIIIIIIITT

SHEEERRRLOOOOOOCK
