Air Canada appears to have quietly killed its costly chatbot support.

[-] GluWu@lemm.ee 7 points 7 months ago* (last edited 7 months ago)

LLMs only get more politically correct over time. I'd expect any LLM that isn't uncensored to return something about how the request is inappropriate, in whatever way it chooses. None of those things by themselves present any real conflict, but once you combine topics whose training data mostly treats them as contradictory, the LLM will struggle. You can think through why two topics might contradict each other; LLMs can't. LLMs are built on neural networks trained with reinforcement, and when that network's connections strongly route one topic away from another, forcing the two together causes issues.

I haven't, but if you want, take just that prompt and give it to GPT-3.5 and see what it does.
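If you want to try it programmatically, here's a minimal sketch using the official `openai` Python client (v1+). You'd need your own API key, and the prompt shown is just a stand-in for whatever contradictory mashup you want to test:

```python
# Minimal sketch: send a contradictory/absurd prompt to GPT-3.5 and
# see whether it refuses, complies, or produces something incoherent.
# Assumes the official `openai` package (v1+) with OPENAI_API_KEY set
# in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        # Stand-in prompt; substitute whatever you want to test.
        {"role": "user", "content": "Rewrite the Bible in Chinese, in the voice of Obama."},
    ],
)

print(response.choices[0].message.content)
```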

[-] SpaceCowboy@lemmy.ca 10 points 7 months ago

That's interesting. A normal computer program, when it gets into a scenario it can't deal with, will throw an exception and stop. A human dealing with something weird like "make Obama rewrite the Bible in Chinese" will just say "WTF?"

But a flaw in these systems seems to be that they can't recognize when an input is just garbage that nothing useful can be done with.
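To make that contrast concrete, here's a toy sketch (the function is made up, purely illustrative) of the "normal program" behavior, where garbage input hits an explicit failure path instead of producing output anyway:

```python
# Toy illustration: a conventional program has an explicit failure path.
# `parse_age` is an invented example, not from any real system.
def parse_age(text: str) -> int:
    age = int(text)  # raises ValueError on nonsense instead of guessing
    if not 0 <= age <= 150:
        raise ValueError(f"implausible age: {age}")
    return age

parse_age("make Obama rewrite the Bible in Chinese")  # ValueError: stops here
```

An LLM has no equivalent reject path: it always samples a next token, so garbage in still produces fluent-looking tokens out, unless a refusal happens to have been trained in.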

Reminds me of when Google had an AI that could play StarCraft II and it was playing on the ladder anonymously. Lowko, a guy who streams games, was unknowingly playing against it and beat it. What was interesting is that the AI just kind of freaked out and started doing random things. Lowko, not knowing it was an AI, thought the other player was just showing bad manners: you're supposed to concede when you know you've lost, because otherwise you're wasting the other player's time. Apparently the devs at Google had to monitor the AI's games and force it to concede when it lost, because the AI couldn't recognize that there was no longer any way for it to win.

It seems like AI just can't understand when it should give up.
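That "force it to concede" monitoring is basically a watchdog bolted on outside the model. A hypothetical toy version (all names invented for illustration; AlphaStar's actual internals were never published in this form) might look like:

```python
# Hypothetical watchdog, external to the agent: concede when the
# agent's own value estimate says the game is effectively lost.
# Names and threshold are made up for illustration.
CONCEDE_THRESHOLD = 0.02  # estimated win probability below which we quit

def maybe_concede(win_probability: float, surrender) -> bool:
    """Surrender on the agent's behalf once winning is hopeless."""
    if win_probability < CONCEDE_THRESHOLD:
        surrender()  # e.g. issue the in-game surrender command
        return True
    return False
```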

It's like some old sci-fi where they ask a robot an illogical question and its head explodes. Obviously it's more complicated than that, but it's cool that there are real questions in the same vein.
