submitted 8 months ago by L4s@lemmy.world to c/technology@lemmy.world

Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis::Google says it’s aware of historically inaccurate results for its Gemini AI image generator, following criticism that it depicted historically white groups as people of color.

[-] RGB3x3@lemmy.world 53 points 8 months ago

> A Washington Post investigation last year found that prompts like “a productive person” resulted in pictures of entirely white and almost entirely male figures, while a prompt for “a person at social services” uniformly produced what looked like people of color. It’s a continuation of trends that have appeared in search engines and other software systems.

This is honestly fascinating. It's putting human biases on full display at a grand scale. It would be near-impossible to quantify racial biases across the internet with so much data to parse. But these LLMs ingest so much of it and simplify the data all down into simple sentences and images that it becomes very clear how common the unspoken biases we have are.

There's a lot of learning to be done here and it would be sad to miss that opportunity.

[-] Eyck_of_denesle@lemmy.zip 0 points 8 months ago

How are you guys getting it to generate "persons"? It simply says it's against its Google AI principles to generate images of people.

[-] FinishingDutch@lemmy.world 2 points 8 months ago

They actually neutered their AI on Thursday, after this whole thing blew up.

https://abcnews.go.com/Business/wireStory/google-suspends-gemini-chatbots-ability-generate-pictures-people-107446867

So right now, everyone's fucked because Google decided to make a complete mess of this.

[-] Eyck_of_denesle@lemmy.zip 1 points 8 months ago

Damn. It keeps saying some dumb shit when asked for images now. I got here too late :(

[-] echodot@feddit.uk 1 points 8 months ago

You can generate images of people, just not actual real people. You can't create an image in the likeness of a particular person, but if you just put in "people at work" it will generate images of humans.

[-] Buttons@programming.dev -1 points 8 months ago* (last edited 8 months ago)

> It’s putting human biases on full display at a grand scale.

The skin color of people in images doesn't matter that much.

The problem is these AI systems have more subtle biases, ones that aren't easily revealed with simple prompts and amusing images, and these AIs are being put to work making decisions who knows where.

[-] intensely_human@lemm.ee 9 points 8 months ago

In India they’ve been used to determine whether people should be kept on or kicked off of programs like food assistance.

[-] rottingleaf@lemmy.zip -1 points 8 months ago* (last edited 8 months ago)

Well, humans are similar to pigs in the sense that they'll always find the stinkiest pile of junk in the area and taste it before any alternative.

EDIT: That's about the popularity of "AI" today, not about semantic expert systems like what they'd do with Lisp machines.

[-] kromem@lemmy.world -5 points 8 months ago

> It's putting human biases on full display at a grand scale.

Not human biases. Biases in the labeled data set. Those can sometimes correlate with human biases, but they don't have to.

> But these LLMs ingest so much of it and simplify the data all down into simple sentences and images that it becomes very clear how common the unspoken biases we have are.

Not LLMs. The image generation models are diffusion models. The LLM only hooks into them to send over the prompt and return the generated image.
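Roughly, the split looks like the sketch below. This is just an illustration using the open-source diffusers library as a stand-in, not Google's actual stack; the "LLM step" is a placeholder prompt rewrite.

```python
# Minimal sketch of the LLM-to-diffusion handoff described above
# (illustrative only, NOT Google's actual pipeline).
from diffusers import StableDiffusionPipeline

def llm_rewrite(user_request: str) -> str:
    # Placeholder for the LLM step: in a real system a chat model
    # rewrites or augments the request before handing it off.
    return f"photo of {user_request}, natural lighting"

# The actual image generation is done by a diffusion model, not the LLM.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
image = pipe(llm_rewrite("people at work")).images[0]
image.save("people_at_work.png")
```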

[-] Ultraviolet@lemmy.world 3 points 8 months ago

> Not human biases. Biases in the labeled data set.

Who made the data set? Dogs? Pigeons?

[-] kromem@lemmy.world 5 points 8 months ago

If you train on Shutterstock and end up with a bias towards smiling, is that a human bias, or a stock photography bias?

Data can be biased in a number of ways that don't always reflect broader social biases, and even when they appear to, the cause versus correlation behind the parallel isn't necessarily straightforward.

[-] VoterFrog@lemmy.world 1 points 8 months ago

I mean "taking pictures of people who are smiling" is definitely a bias in our culture. How we collectively choose to record information is part of how we encode human biases.

I get what you're saying in specific circumstances. Sure, a dataset that is built from a single source doesn't make its biases universal. But these models were trained on a very wide range of sources. Wide enough to cover much of the data we've built a culture around.

[-] kromem@lemmy.world 2 points 8 months ago* (last edited 8 months ago)

Except these kinds of data-driven biases can creep in from all sorts of places.

Is there a bias in what images have labels and what don't? Did they focus only on English labeling? Did they use a vision based model to add synthetic labels to unlabeled images, and if so did the labeling model introduce biases?

Just because the sampling is broad doesn't mean the processes involved don't introduce procedural bias distinct from social biases.
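As a toy illustration of one of those checks (whether the captions are even in English), here's a sketch that assumes nothing more than a plain list of caption strings and the off-the-shelf langdetect package; it has nothing to do with any real training pipeline.

```python
# Toy audit of one procedural bias: what share of a dataset's captions
# are detected as English? "captions" is just an illustrative list.
from langdetect import detect

captions = [
    "a smiling person at work",
    "une personne au travail",
    "trabajador sonriente en la oficina",
]

english = sum(1 for c in captions if detect(c) == "en")
print(f"{english}/{len(captions)} captions detected as English")
```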
