this post was submitted on 27 May 2024
1101 points (98.1% liked)


You know how Google's new feature called AI Overviews is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to put glue on pizza to make sure the cheese won't slide off (pssst... please don't do this).

Well, according to an interview at The Verge with Google CEO Sundar Pichai, published earlier this week, just before criticism of the outputs really took off, these "hallucinations" are an "inherent feature" of the AI large language models (LLMs) that drive AI Overviews, and this feature "is still an unsolved problem."

[–] DarkThoughts@fedia.io 6 points 5 months ago

Not even that; it's an inherent issue with how LLMs work. The problem is also that these systems have become so easy to use that people stop thinking for themselves. We already see it in how zoomers and boomers have an eerily similar understanding of tech, versus millennials, a generation containing a huge number of pre-mainstream tech nerds who grew up with this stuff before it was easy to use. A regular page of search results still requires the user to sift through it, but an LLM response is usually taken at face value and not even fact checked. It's typically not even possible to dissect the reply into its source tokens and figure out where its information came from. So now that these things have become easy enough for any idiot to use, it has also become trivially easy to spread misinformation, and potentially even disinformation if we assume actual malice.
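To make the "no sources to trace" point concrete, here's a minimal Python sketch of how next-token sampling works. The vocabulary and probabilities are toy values made up for illustration, not from any real model; a real LLM learns billions of weights, but the principle is the same:

```python
import random

# Toy "language model": a table of next-token probabilities.
# A real LLM distills these weights from its whole training corpus,
# and the finished model keeps no pointer back to the documents
# those weights came from.
NEXT_TOKEN_PROBS = {
    "the":    {"cheese": 0.7, "glue": 0.3},
    "cheese": {"sticks": 0.6, "slides": 0.4},
    "glue":   {"sticks": 0.9, "slides": 0.1},
    "sticks": {"to": 1.0},
    "slides": {"off": 1.0},
}

def generate(token: str, max_steps: int = 4) -> str:
    """Sample a continuation one token at a time."""
    out = [token]
    for _ in range(max_steps):
        dist = NEXT_TOKEN_PROBS.get(out[-1])
        if dist is None:
            break
        tokens, weights = zip(*dist.items())
        # Each step is a weighted random draw. A fluent but wrong
        # continuation ("the glue sticks to ...") can be sampled,
        # and nothing in the output records where it came from.
        out.append(random.choices(tokens, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
```

The output is just a chain of probabilistic draws, which is why you can't "cite your sources" out of a raw LLM reply: the provenance was discarded at training time.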