[–] ICastFist@programming.dev 8 points 2 weeks ago

Anthropic made lots of intriguing discoveries using this approach, not least of which is why LLMs are so terrible at basic mathematics. "Ask Claude to add 36 and 59 and the model will go through a series of odd steps, including first adding a selection of approximate values (add 40ish and 60ish, add 57ish and 36ish). Towards the end of its process, it comes up with the value 92ish. Meanwhile, another sequence of steps focuses on the last digits, 6 and 9, and determines that the answer must end in a 5. Putting that together with 92ish gives the correct answer of 95," the MIT article explains.
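To make that concrete, here's a toy Python sketch of those two parallel pathways. It's purely illustrative (the function and its rounding window are my invention, not Anthropic's actual circuit), but it captures the shape of what's described: a fuzzy magnitude estimate plus an exact ones digit, combined at the end.

```python
def claude_style_add(a: int, b: int) -> int:
    """Toy model of the two pathways described above (illustration only;
    the real model computes with learned features, not explicit code)."""
    # Pathway 1: rough magnitude estimate ("add 40ish and 60ish").
    approximate = round(a, -1) + round(b, -1)   # 36 -> 40, 59 -> 60
    # Pathway 2: exact ones digit (6 + 9 = 15, so the answer ends in 5).
    last_digit = (a % 10 + b % 10) % 10
    # Combine: exactly one number within 5 of the estimate has that last digit.
    # (Only works when the rough estimate lands near the true sum, as here.)
    return next(n for n in range(approximate - 5, approximate + 5)
                if n % 10 == last_digit)

print(claude_style_add(36, 59))  # 95
```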

But here's the really funky bit. If you ask Claude how it got the correct answer of 95, it will apparently tell you, "I added the ones (6+9=15), carried the 1, then added the 10s (3+5+1=9), resulting in 95." But that actually only reflects common answers in its training data as to how the sum might be completed, as opposed to what it actually did.

Another very surprising outcome of the research is the discovery that these LLMs do not, as is widely assumed, operate by merely predicting the next word. By tracing how Claude generated rhyming couplets, Anthropic found that it chose the rhyming word at the end of verses first, then filled in the rest of the line.
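A toy sketch of that "plan the ending first" behavior (the rhyme table and line templates here are invented stand-ins, not anything from the research):

```python
import random

# Invented mini rhyme table; a real system would use a pronunciation dictionary.
RHYMES = {"grab it": ["rabbit", "habit"], "day": ["way", "stay", "play"]}
TEMPLATES = ["his hunger was like a starving {w}", "we wandered down along the {w}"]

def next_couplet_line(prev_end_word: str) -> str:
    # Step 1: plan the ending first -- pick the rhyming word before anything else.
    end_word = random.choice(RHYMES[prev_end_word])
    # Step 2: only then fill in the rest of the line, steering toward that word.
    return random.choice(TEMPLATES).format(w=end_word)

print(next_couplet_line("grab it"))  # e.g. "his hunger was like a starving rabbit"
```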

[–] Not_mikey@slrpnk.net 8 points 2 weeks ago (2 children)

> Another very surprising outcome of the research is the discovery that these LLMs do not, as is widely assumed, operate by merely predicting the next word. By tracing how Claude generated rhyming couplets, Anthropic found that it chose the rhyming word at the end of verses first, then filled in the rest of the line.

If the LLM already knows the full sentence it's going to output from the first word it "guesses", I wonder if you could short-circuit it and just have it output the full sentence instead of running a cycle for each word. That could maybe cut down on LLM energy costs.
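The catch is that the "plan" only exists as internal activations, and standard decoding still pays one full forward pass per emitted token. A schematic sketch, with `model` and `sample` as placeholder names rather than any real API:

```python
def sample(logits: list[float]) -> int:
    # Greedy stand-in for real sampling: pick the highest-scoring token.
    return max(range(len(logits)), key=logits.__getitem__)

def generate(model, prompt_tokens: list[int], max_new_tokens: int) -> list[int]:
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):     # one full forward pass per new token
        logits = model.forward(tokens)  # any "plan" lives in hidden activations,
        tokens.append(sample(logits))   # but only a single token gets emitted
    return tokens
```

This is roughly the gap that speculative decoding tries to close: a cheap draft model proposes several tokens at once and the large model verifies them in a single pass, rather than reading the plan out directly.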

[–] angrystego@lemmy.world 5 points 2 weeks ago

I don't think it knows the full sentence; it just doesn't search for the words in the order they'll appear in the sentence. It finds the end words first to make the poem rhyme, then looks for the rest of the words. I do it this way as well, just like many other people trying to write any kind of rhyming text.

[–] pennomi@lemmy.world 7 points 2 weeks ago (1 children)

This is great stuff. If we can properly understand these “flows” of intelligence, we might be able to write optimized shortcuts for them, vastly improving performance.

[–] SplashJackson@lemmy.ca 5 points 2 weeks ago (1 children)
[–] nilclass@discuss.tchncs.de 4 points 2 weeks ago

You can become one too! Get your certification here https://mt.cert.ccc.de/

[–] Bell@lemmy.world 5 points 2 weeks ago

How can I take an article that uses the word "anywho" seriously?

[–] moonlight@fedia.io 4 points 2 weeks ago (6 children)

The math example in particular is very interesting, and makes me wonder if we could splice a calculator into the model, basically doing "brain surgery" to short-circuit the learned arithmetic process and replace it.
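A much cruder version of that idea already exists as tool calling: route arithmetic to exact code and everything else to the model. A minimal sketch (the `llm` callable is a hypothetical placeholder, and this routes around the learned circuit rather than performing the weight-level surgery being imagined):

```python
import re

def answer(prompt: str, llm) -> str:
    """Route plain arithmetic to exact code, everything else to the model.
    Crude stand-in for "splicing in a calculator"; real systems get a similar
    effect with tool calling rather than editing the model itself."""
    m = re.fullmatch(r"\s*(\d+)\s*([+*-])\s*(\d+)\s*", prompt)
    if m:
        a, op, b = int(m[1]), m[2], int(m[3])
        return str({"+": a + b, "-": a - b, "*": a * b}[op])
    return llm(prompt)  # placeholder callable, not a real API

print(answer("36 + 59", llm=lambda p: "..."))  # 95
```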
