this post was submitted on 02 Dec 2024
113 points (88.4% liked)


Original USA Today story link

Social media users have noticed something strange that happens when ChatGPT is prompted to recognize the name "David Mayer."

For some reason, the two-year-old chatbot developed by OpenAI is unable – or unwilling? – to acknowledge the name at all.

The quirk was first uncovered by an eagle-eyed Reddit user who entered the name "David Mayer" into ChatGPT and was met with a message stating, "I'm unable to produce a response." The mysterious reply sparked a flurry of additional attempts from users on Reddit to get the artificial intelligence tool to say the name – all to no avail.

It's unclear why ChatGPT fails to recognize the name, but of course, plenty of theories have proliferated online. Is it a simple error that OpenAI has yet to address, or has someone named David Mayer taken steps to remove his digital footprint?

Here's what we know:

What is ChatGPT?

ChatGPT is a generative artificial intelligence chatbot developed by OpenAI and launched in 2022.

As opposed to predictive AI, generative AI is trained on large amounts of data in order to identify patterns and create content of its own, including voices, music, pictures and videos.

ChatGPT allows users to interact with the chatting tool much like they could with another human, with the chatbot generating conversational responses to questions or prompts.

Proponents say ChatGPT could reinvent online search engines and could assist with research, writing, content creation and customer service. However, the service has at times become controversial, with some critics raising concerns that ChatGPT and similar programs fuel online misinformation and enable students to plagiarize.

ChatGPT is also apparently, mystifyingly, stubborn about recognizing the name David Mayer.

Since the baffling refusal was discovered, users have been trying to find ways to get the chatbot to say the name or explain who the mystery man is.

A quick Google search of the name leads to results about British adventurer and environmentalist David Mayer de Rothschild, heir to the famed Rothschild family dynasty.

Mystery solved? Not quite.

Others speculated that the name is banned from being mentioned due to its association with a Chechen terrorist who operated under the alias "David Mayer."

But as AI expert Justine Moore pointed out on social media site X, a plausible scenario is that someone named David Mayer has gone out of his way to remove his presence from the internet. In the European Union, for instance, strict privacy laws allow citizens to file "right to be forgotten" requests.

Moore posted about other names that trigger the same response when shared with ChatGPT, including an Italian lawyer who has been public about filing a "right to be forgotten" request.

USA TODAY left a message Monday morning with OpenAI seeking comment.

[–] AbouBenAdhem@lemmy.world 39 points 2 weeks ago* (last edited 2 weeks ago) (4 children)

Is it a simple error that OpenAI has yet to address

Since even their own creators can’t understand or control LLMs at that level of granularity, that seems to go without saying. Although calling it an “error” implies that there’s some criterion for defining correct behavior, which no one has yet agreed on.

or has someone named David Mayer taken steps to remove his digital footprint

Sure—someone gained a level of control over ChatGPT beyond that of its own developers, and used that power to prevent it from inventing gossip about himself. (ChatGPT output isn’t even a digital footprint, it’s a digital hallucination.)

A quick Google search of the name leads to results about British adventurer and environmentalist David Mayer de Rothschild...

Oh, FFS.

[–] missingno@fedia.io 32 points 2 weeks ago (2 children)

A few other names have been discovered that ChatGPT also will not output, and none of them seem to be anyone special.

I think the most plausible explanation is that these individuals filed a Right to be Forgotten request, and rather than actually scrubbing any data, OpenAI's kludge was to simply have the frontend throw an error any time the LLM would output a forbidden name. I doubt this is anything happening within the LLM, just a filter on its output.
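The kludge described above can be sketched in a few lines. This is purely a hypothetical illustration of an output-side blocklist filter, not OpenAI's actual implementation; the blocked names and the error message are assumptions taken from the behavior users reported.

```python
# Hypothetical sketch of a post-generation output filter, as speculated above.
# Names and error text are illustrative assumptions, not OpenAI's real blocklist.

BLOCKED_NAMES = ["David Mayer"]  # e.g. names tied to right-to-be-forgotten requests

def filter_response(llm_output: str) -> str:
    """Return the model's output unless it contains a blocked name."""
    for name in BLOCKED_NAMES:
        if name.lower() in llm_output.lower():
            # The LLM itself completed fine; the frontend just refuses
            # to display the result, matching the error users saw.
            return "I'm unable to produce a response."
    return llm_output

print(filter_response("The weather is nice today."))
print(filter_response("Tell me about David Mayer."))
```

A filter like this would explain why the refusal is identical every time and why it appears regardless of how the prompt is phrased: nothing inside the model changes, only what the user is allowed to see.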

[–] WalnutLum@lemmy.ml 6 points 2 weeks ago

This is much more likely, as LLMs typically only respond that they can't formulate a response when there's an error in the prompt pre-processing systems.

[–] unnamedau@lemmy.ca 2 points 2 weeks ago

A friend managed to confirm that it's something in the moderation layer they put on top of the UI; the API doesn't have any problem saying any of the names people have mentioned.

[–] wondrous_strange@lemmy.world 4 points 2 weeks ago

Or they could have just put another layer between the prompt and GPT that does simple filtering. The output seems to be the same every time, no?

[–] Windex007@lemmy.world 4 points 2 weeks ago (1 children)

I don't think it's at all required that someone gained "a level of control". I think the mechanism more likely at play (if this is the root cause) is that some training data included news articles about how these people wanted to remove their presence, and the articles were talking about the legality and morality around it.

[–] AbouBenAdhem@lemmy.world 3 points 2 weeks ago

Presumably there was something in the training data that caused this pattern. The difficulty is in predicting beforehand what the exact effect of changes to the training data will be.

[–] yesman@lemmy.world 3 points 2 weeks ago (1 children)

A quick Google search of the name leads to results about British adventurer and environmentalist David Mayer de Rothschild…

This is likely the explanation. GPT has some kind of bug in the antisemitism filter.