this post was submitted on 27 May 2024
1101 points (98.1% liked)

You know how Google's new feature called AI Overviews is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to use glue on pizza to make sure the cheese won't slide off (pssst...please don't do this.)

Well, according to an interview at The Verge with Google CEO Sundar Pichai, published earlier this week just before criticism of the outputs really took off, these "hallucinations" are an "inherent feature" of AI large language models (LLMs), the technology that drives AI Overviews, and this feature "is still an unsolved problem."

[–] AFC1886VCC@reddthat.com 34 points 5 months ago (3 children)

I think we should stop calling things AI unless they actually have their own intelligence independent of human knowledge and training.

[–] some_designer_dude@lemmy.world 22 points 5 months ago (1 children)

But we aren’t intelligent without human training, either…

[–] aBundleOfFerrets@sh.itjust.works 16 points 5 months ago (3 children)

Never been tested due to ethical constraints

[–] SkyeStarfall@lemmy.blahaj.zone 9 points 5 months ago

Kind of has been, not in a scientific manner, but there's the whole phenomenon of "feral human".

[–] PlexSheep@infosec.pub 6 points 5 months ago (1 children)

There have been very unethical experiments

[–] aBundleOfFerrets@sh.itjust.works 3 points 5 months ago

Sure, but this one hasn’t been done, and if you walk up to a researcher and ask “y no lock bby in white box” they will tell you to leave and might even call the cops if you seem particularly determined

[–] MonkderDritte@feddit.de 2 points 5 months ago (1 children)

There are examples of children raised by other animals.

[–] aBundleOfFerrets@sh.itjust.works 1 points 5 months ago

Not exactly a scientific setting, and you can’t rule out the effects of abuse on these children

[–] Mkengine@feddit.de 5 points 5 months ago (1 children)

Isn't there already the term AGI for that?

[–] gentooer@programming.dev 3 points 5 months ago

Yes, and the researchers I know doing stuff with AI find the idea of AGI laughable.

[–] bss03@infosec.pub 1 points 5 months ago

Academia has been using the term "AI" for a while now for things that are much less sophisticated than the current, popular generation of media generators. I took an "Artificial Intelligence" class as part of my undergrad around the turn of the century.

It is confusing, though, since "sentience" and "intelligence" can be synonyms in the right context, yet no AI has shown any good evidence of being a non-human sentient being.