this post was submitted on 30 Jul 2024
958 points (97.9% liked)

Technology


If you've watched any Olympics coverage this week, you've likely been confronted with an ad for Google's Gemini AI called "Dear Sydney." In it, a proud father seeks help writing a letter on behalf of his daughter, who is an aspiring runner and superfan of world-record-holding hurdler Sydney McLaughlin-Levrone.

"I'm pretty good with words, but this has to be just right," the father intones before asking Gemini to "Help my daughter write a letter telling Sydney how inspiring she is..." Gemini dutifully responds with a draft letter in which the LLM tells the runner, on behalf of the daughter, that she wants to be "just like you."

I think the most offensive thing about the ad is what it implies about the kinds of human tasks Google sees AI replacing. Rather than using LLMs to automate tedious busywork or difficult research questions, "Dear Sydney" presents a world where Gemini can help us offload a heartwarming shared moment of connection with our children.

Inserting Gemini into a child's heartfelt request for parental help makes it seem like the parent in question is offloading their responsibilities to a computer in the coldest, most sterile way possible. More than that, it comes across as an attempt to avoid an opportunity to bond with a child over a shared interest in a creative way.

[–] Eximius@lemmy.world 5 points 3 months ago (1 children)

Furthermore, if you lack proficiency in a language, using a tool to "beautify" a paragraph in that language will generally fail to improve communication, because ChatGPT tries to infer and add information that just isn't there (details, connotations, idioms). It will just add more garbage to the conversation, and most likely words and meanings that just aren't yours.

[–] tempest@lemmy.ca 2 points 3 months ago (1 children)

It's fine. Eventually, when people start using this crap en masse, the people on the other end will just be using LLMs to distill the bullshit down to three key points anyway.

[–] Emmie@lemm.ee 1 points 3 months ago* (last edited 3 months ago) (1 children)

That would be bizarre, lol

Let’s say one person writes three pages around some key points. Another person's LLM extracts points already distorted by the added garbage, expands them into a two-page essay for someone else, and that person's LLM extracts distorted points again. The original message is long gone and communication has failed, but the bots keep talking to each other, so to speak, producing even more garbage.

In the end we are drowning in a humongous pile of generated garbage and no one can effectively communicate anymore.
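The telephone-game degradation described above can be simulated with a toy model. This is purely illustrative, not a claim about any real LLM: assume each summarize-then-rewrite cycle preserves a given key point with some fixed probability and "hallucinates" a replacement for anything it drops.

```python
import random

random.seed(42)

def llm_round_trip(points, keep_prob=0.7):
    """Crudely model one summarize-then-expand cycle: each original key
    point survives with probability keep_prob; every lost point is
    replaced by an invented one, so the message length stays constant."""
    survived = [p for p in points if random.random() < keep_prob]
    invented = [f"invented-{random.randrange(10_000)}"
                for _ in range(len(points) - len(survived))]
    return survived + invented

original = [f"point-{i}" for i in range(10)]
msg = original
for round_no in range(1, 6):
    msg = llm_round_trip(msg)
    intact = sum(p in original for p in msg)
    print(f"after round {round_no}: {intact}/10 original points remain")
```

With a 70% per-cycle survival rate, only a point or two of the original ten typically remains after five hops — the rest is generated filler that still *looks* like a coherent message.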

[–] Eximius@lemmy.world 2 points 3 months ago (1 children)

The funny thing is this is mostly true even without LLMs or other bots. People and institutions can't communicate because of leviathan amounts of legalese, say-literally-nothing-but-hide-it-in-a-mountain-of-bullshitese, barely-a-correlation-but-inflate-it-to-be-groundbreaking-ese, literally-lie-but-it's-too-complicatedly-phrased-for-anybody-to-call-it-false-advertising-ese.

What about using an LLM to extract actual EULA key points?
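One way to make that idea less risky is to force the model to quote its sources so its output can be checked mechanically. The sketch below is hypothetical: `call_llm` stands in for whatever chat-completion API you actually use and is not a real vendor API; only the prompt-building and quote-verification helpers are shown.

```python
def build_eula_prompt(eula_text: str, max_points: int = 5) -> str:
    """Assemble an extraction prompt. Demanding a verbatim quote for
    each point makes hallucinated 'key points' detectable afterwards."""
    return (
        f"List at most {max_points} key obligations or waivers in the "
        "EULA below. For each point, quote the exact sentence it comes "
        "from, so the reader can verify it against the source.\n\n"
        f"EULA:\n{eula_text}"
    )

def verify_quotes(eula_text: str, quoted_sentences: list[str]) -> list[str]:
    """Keep only quotes that literally appear in the EULA; anything
    else was likely invented by the model and should be discarded."""
    return [q for q in quoted_sentences if q in eula_text]
```

Even with this check, a fabricated *summary* of a real quote slips through, so it mitigates rather than solves the reliability problem raised below.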

[–] Emmie@lemm.ee 1 points 3 months ago* (last edited 3 months ago)

I wouldn’t rely on an LLM to read anything that matters for you. Maybe it will do OK nine times out of ten, but when it fails you won’t even know until it’s too late.

What if the EULA itself was ChatGPT-generated from another ChatGPT-generated output, from another, etc.? Madness. Such a EULA would suddenly be pure garbage, and with everyone cutting costs, no one will even notice how much they're relying on AI until it's all FUBAR.

So sure, it will initially seem like a helpful tool — "make key points from this text," where the text was itself generated by someone from other key points extracted by GPT — but the mistakes will multiply with each iteration.
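The compounding is easy to quantify. Assuming each chained extraction step independently preserves a given fact with probability p, the fact survives n hops with probability p**n — so even the "nine times out of ten" model mentioned above degrades quickly:

```python
# Illustrative arithmetic, assuming independent per-hop error rates.
def survival(p: float, n: int) -> float:
    """Probability a fact survives n chained extraction steps,
    if each step preserves it with probability p."""
    return p ** n

for n in (1, 2, 3, 5):
    print(f"p=0.9, {n} hop(s): {survival(0.9, n):.0%} of facts survive")
```

At p = 0.9, five hops already lose roughly four facts in ten — and the reader of the final summary has no way to tell which ones.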