submitted 3 months ago by RGB@group.lt to c/technology@lemmy.world

Google rolled out AI Overviews across the United States this month, exposing its flagship product to the hallucinations of large language models.

[-] balder1991@lemmy.world 3 points 3 months ago* (last edited 3 months ago)

I don’t even think it’s correct to say it’s querying anything, in the sense of a database. An LLM predicts the next token with no regard for the truth (there’s no notion of factual truth in the training objective to penalize it, since that’s a very hard thing to measure).

Keep in mind that the same mechanism that lets it learn the language is what lets it sort of come up with facts: it’s just a statistical distribution over the next token given the whole context, with a bit of randomness added so it can be “creative.” So the ability to state facts isn’t something LLMs were designed for; it’s just something we noticed happens as they learn the language.
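As a rough illustration (made-up token scores, nothing like a real model), next-token generation is just sampling from a probability distribution, with a temperature knob controlling how much of that “creative” randomness gets in:

```python
import math, random

def sample_next_token(logits, temperature=1.0):
    """Sample a token index from raw scores via temperature-scaled softmax."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one token index according to the distribution
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Hypothetical scores for candidate next tokens after "The sky is"
candidates = {"blue": 5.0, "clear": 3.0, "falling": 1.0}
tokens = list(candidates)
idx = sample_next_token(list(candidates.values()), temperature=0.8)
print(tokens[idx])  # usually "blue", occasionally something else
```

Lower the temperature and the top token wins almost every time; raise it and the tail gets sampled more often. Nowhere in this loop is there a check for whether the chosen token is *true*.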

So it learned from a specific dataset, but whether it picks up a given piece of information depends on how well represented that information is in the dataset. Information that appears repeatedly on the web is easy for it to answer, since it was reinforced during training; information that doesn’t show up much is just not gonna be learned consistently.[1]

[1] https://youtu.be/dDUC-LqVrPU
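A toy counting model (hypothetical corpus, vastly simpler than a real LLM) shows the same effect: a statement repeated across the training text dominates the distribution, while one seen only once barely registers:

```python
from collections import Counter, defaultdict

# Made-up training text: one "fact" appears 50 times, another just once
corpus = (
    "paris is the capital of france . " * 50
    + "ngerulmud is the capital of palau . "
).split()

# "Train" a bigram model: count which word follows which
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(prev):
    """Return the most likely next word and its probability."""
    counts = follows[prev]
    word, n = counts.most_common(1)[0]
    return word, n / sum(counts.values())

print(predict("of"))  # the repeated fact dominates: ('france', ~0.98)
```

After “of”, the model answers “france” with ~98% probability simply because that continuation was reinforced 50 times, while “palau” was seen once. Same mechanics, just scaled down.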

[-] atrielienz@lemmy.world 1 points 3 months ago

I understand the gist, but I don’t mean that it’s actively looking up facts. I mean that it’s using bad information to give a result (as in, the training data says 1+1=5, so it gives that result because that’s what the training data contained). The hallucinations, as the people studying them call them, aren’t that. They happen when the training data doesn’t have an answer for 1+1, so the LLM can’t do math to say that the next likely token is 2. It doesn’t have a result at all, but it’s built to give one anyway, so it gives nonsense.
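One way to picture that “built to give a result anyway” point (toy numbers, hypothetical three-word vocabulary): the softmax always spreads probability 1 across the vocabulary, so even when no candidate has real support, sampling still emits something:

```python
import math

def softmax(logits):
    """Convert raw scores into probabilities that always sum to 1."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Strong evidence: one token clearly wins
print(softmax([8.0, 1.0, 1.0]))  # ~[0.998, 0.001, 0.001]

# No real evidence: near-uniform scores, but the probabilities still
# sum to 1, so the model returns a confident-looking token either way
probs = softmax([1.0, 1.1, 0.9])
print(probs, sum(probs))
```

There’s no built-in “I don’t know” output; unless a model is specifically trained to refuse, low-evidence contexts get an answer just like high-evidence ones do.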

[-] balder1991@lemmy.world 2 points 3 months ago* (last edited 3 months ago)

Yeah, I think the problem is really that language is ambiguous and the LLMs can get confused about certain features of it.

For example, I often ask different models when the Go programming language was created, just to compare them. Some say 2007 most of the time and some say 2009, which isn’t all that wrong, as 2009 is when it was officially announced.

This gives me a hint that LLMs can mix up things that are “close enough” to the concept we’re looking for.

this post was submitted on 25 May 2024
818 points (97.7% liked)
