this post was submitted on 12 Jun 2024
392 points (95.6% liked)

Technology

[–] fubarx@lemmy.ml 8 points 5 months ago (1 children)

They could make Siri change its voice and Genmoji based on the degree of certainty of the response:

  • Trust me: Arnold as Terminator 😎
  • Eehhhh, could be bullshit: shrugging old man meme 🤷🏻‍♂️
  • Just kiddin' here: whacky Jerry Lewis 🤪

They could sell different voice packages. Revive the ringtone market.
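The idea above could be sketched as a simple bucketing function. This is purely illustrative: the function name, thresholds, and persona labels are all made up here, and a real confidence score would have to come from the model itself.

```python
# Hypothetical sketch: map a confidence score in [0.0, 1.0] to one of the
# three response personas from the comment. Thresholds are invented.
def persona_for_confidence(confidence: float) -> str:
    if confidence >= 0.9:
        return "Trust me"           # Arnold as Terminator
    if confidence >= 0.5:
        return "Could be bullshit"  # shrugging old man meme
    return "Just kiddin'"           # wacky Jerry Lewis

print(persona_for_confidence(0.95))  # Trust me
```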

[–] onion@feddit.de 19 points 5 months ago (2 children)

The AI is confidently wrong; that's the whole problem. If there were an easy way to know when it could be wrong, we wouldn't be having this discussion.

[–] AdrianTheFrog@lemmy.world 2 points 5 months ago

This paper tries to do that: arxiv.org/pdf/2404.04689

There are also several other techniques, I think.
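One common baseline (not the method from the linked paper): treat the mean log-probability of the generated tokens as a rough confidence signal. The threshold and the token log-probabilities below are made-up example values; real ones would come from a model API that exposes per-token logprobs.

```python
# Sketch of a logprob-based confidence proxy. A mean token logprob near 0
# means the model assigned high probability to its own output; a very
# negative mean means many alternatives were nearly as likely.
def mean_logprob(token_logprobs):
    return sum(token_logprobs) / len(token_logprobs)

def seems_confident(token_logprobs, threshold=-1.0):
    # Threshold is arbitrary here; in practice it would be calibrated
    # against data where the model's answers were checked for correctness.
    return mean_logprob(token_logprobs) >= threshold

confident = [-0.05, -0.10, -0.02]  # model strongly preferred these tokens
hedgy = [-2.3, -1.9, -3.1]         # model was torn between alternatives
print(seems_confident(confident))  # True
print(seems_confident(hedgy))      # False
```

Note that this only measures how sure the model *sounds* internally, not whether it is right; confidently wrong answers can still score high, which is exactly the calibration problem being discussed.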

[–] Ashyr@sh.itjust.works 1 points 5 months ago

While it can’t “know” its own confidence level, it can distinguish between general knowledge (12” in 1’) and specialized knowledge that requires supporting sources.

At one point, I had a ChatGPT memory designed to make it automatically provide sources for specialized knowledge, and it did a pretty good job.