[–] mindlesscrollyparrot@discuss.tchncs.de 8 points 6 months ago (1 children)

This seems to be a really long way of saying that you agree that current LLMs hallucinate all the time.

I'm not sure that the ability to change in response to new data would necessarily be enough. They cannot form hypotheses and, even if they could, they have no way to test them.

[–] UnpluggedFridge@lemmy.world -3 points 6 months ago (1 children)

My thesis is that we keep asserting that AIs lack human-like qualities which we cannot even define or measure. Assertions should be based on data, not on the uneasy feeling that arises when an LLM falls into the uncanny valley.

[–] mindlesscrollyparrot@discuss.tchncs.de 5 points 6 months ago (1 children)

But we do know how they operate. I saw a post a while back where somebody asked the LLM how it was calculating (incorrectly) the date of Easter. It answered with the formula for the date of Easter. The only problem is that this was a lie: it doesn't calculate. You or I can perform long multiplication if asked to, but the LLM can't (ironically, since the hardware it runs on is far better at multiplication than we are).
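For what it's worth, the formula it recited really is executable. Here's a minimal sketch, assuming it quoted the standard anonymous Gregorian Computus (the Meeus/Jones/Butcher algorithm), which the model can recite but not actually run:

```python
def easter(year: int) -> tuple[int, int]:
    """Gregorian Easter via the anonymous (Meeus/Jones/Butcher) Computus.

    Returns (month, day).
    """
    a = year % 19                       # position in the 19-year Metonic cycle
    b, c = divmod(year, 100)            # century and year within the century
    d, e = divmod(b, 4)                 # century leap-year corrections
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30  # epact-like term (age of the moon)
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month = (h + l - 7 * m + 114) // 31
    day = (h + l - 7 * m + 114) % 31 + 1
    return month, day

print(easter(2024))  # (3, 31) -> 31 March 2024
```

The point being: producing the right date requires actually performing this arithmetic, which is exactly the step the model skips when it emits a plausible-looking answer.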

[–] UnpluggedFridge@lemmy.world 1 points 5 months ago

We do not know how LLMs operate. As with our own minds, we understand some primitives, but we have no idea how certain phenomena emerge from those primitives. Your assertion is like saying we understand consciousness because we know the structure of a neuron.