this post was submitted on 20 Jul 2023
249 points (96.6% liked)

Over just a few months, ChatGPT went from accurately answering a simple math problem 98% of the time to just 2%, study finds

[–] Veraticus@lib.lgbt 26 points 1 year ago* (last edited 1 year ago) (2 children)

LLMs act nothing like our brains and they aren't trained on facts.

LLMs are essentially complicated mathematical functions that ask, "what word most plausibly comes next, given the words so far?" Think of the autosuggest on your phone, taken to its logical extreme.

They do not think in any sense and have no knowledge or facts internal to themselves. All they do is compose words together.

This is also why they're garbage at math, why they frequently "lie," and why they can't "remember" anything: they are simply stringing words together according to their model, not actually thinking. If the model says the word after "one plus two equals" is more likely to be "four" than "three," they will simply answer "four."
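
As a toy sketch of that failure mode (the probabilities and the `next_word_probs` table are invented for illustration, not any real model's internals), the entire "reasoning" process fits in a few lines of Python:

```python
# Toy next-word table standing in for a trained language model.
# The probabilities are made up; a real LLM learns them from text
# statistics, not from the rules of arithmetic.
next_word_probs = {
    ("one", "plus", "two", "equals"): {"four": 0.6, "three": 0.4},
}

def next_word(context):
    probs = next_word_probs.get(tuple(context), {})
    # Pick whichever word the table says is most likely -- no
    # calculation is performed anywhere in this process.
    return max(probs, key=probs.get) if probs else "<unk>"

print(next_word(["one", "plus", "two", "equals"]))  # -> "four", confidently wrong
```

If the learned statistics happen to favor the wrong word, the wrong word is exactly what comes out.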

[–] Silinde@lemmy.world 7 points 1 year ago* (last edited 1 year ago) (1 children)

LLMs act nothing like our brains and are not neural networks

Err, yes they are. You don't even need to read a paper on the subject; just go straight to the Wikipedia page and it's right there in the first line. The 'T' in GPT literally stands for Transformer, and you're highly unlikely to find a Transformer model that doesn't use an ANN at its core.
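
You can even check it yourself in two lines. A minimal sketch, assuming the Hugging Face `transformers` package, PyTorch, and the public "gpt2" checkpoint:

```python
import torch
from transformers import GPT2LMHeadModel

# Load the smallest public GPT-2 checkpoint.
model = GPT2LMHeadModel.from_pretrained("gpt2")

# A GPT model is an ordinary PyTorch neural network...
print(isinstance(model, torch.nn.Module))  # True

# ...built from standard Transformer blocks (attention + MLP),
# i.e., artificial neural network layers all the way down.
print(model.transformer.h[0])
```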

Please don't turn this place into Reddit by spreading misinformation.

[–] Veraticus@lib.lgbt 2 points 1 year ago

Edited, thanks!

[–] cyd@lemmy.world 2 points 1 year ago

"Nothing like our brains" may be too strong. I strongly suspect that much of human reasoning is little different from stringing words together, albeit with more complicated criteria than current LLMs. For example, children learn maths in a rather similar way, based on language and repeated exposure; humans don't have a built in maths processor in our brains.