this post was submitted on 07 Aug 2023
503 points (97.7% liked)

Science


Was this AI trained on an unbalanced data set (only black folks)? Or has it only been used to identify photos of black people? I have so many questions: some technical, some about media sensationalism

top 50 comments
[–] Yoruio@lemmy.ca 59 points 1 year ago (3 children)

Was this AI trained on an unbalanced dataset (only black folks?)

It's probably the opposite: the AI was likely trained on a dataset of mostly white people, and is thus better able to distinguish between white people.

It's a problem that has been seen in ML before, especially at companies based in the US, where it is simply easier to find large numbers of white people than people of other skin colors.

It's really not dissimilar to how people work, either: humans are generally better at distinguishing between two people of a race they grew up around, and you'll make more mistakes when trying to identify people of races you're less familiar with.

The problem is when the police use these tools as an authoritative matching algorithm.

[–] LetterboxPancake@sh.itjust.works 11 points 1 year ago (3 children)

It's not only growing up with them. We're just better at identifying people/animals/things we're familiar with. Horses all look the same if you're not around them regularly: you can distinguish their colours, but that's it.

Not comparing people to horses, by the way...

[–] lntl@lemmy.ml 3 points 1 year ago (1 children)

I thought they would have trained it on mugshots. Either way, it should never be used to make direct arrests. I feel like its best use would be something like an anonymous tip line that leads to an investigation.

[–] Yoruio@lemmy.ca 3 points 1 year ago

Using mugshots to train AI without consent feels illegal. Plus, it wouldn't even make a very good training set, as the AI would only learn to identify faces in perfectly straight-on shots taken under ideal lighting conditions.

[–] gramathy@lemmy.ml 2 points 1 year ago

Also makes me wonder if our digital color spaces being bad at representing darker shades contributes as well.
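
One way to sanity-check that hunch (a hypothetical back-of-envelope in Python, not a claim about any specific camera pipeline): count how many 8-bit code values land in the darkest 10% of linear luminance under a plain linear encoding versus the standard sRGB transfer curve.

```python
# Back-of-envelope: how many 8-bit code values cover the darkest 10%
# of linear luminance under a linear encoding vs. the sRGB curve?
# (Illustrative only, not a model of any real camera pipeline.)

def srgb_encode(lum: float) -> float:
    """Linear luminance in [0, 1] -> sRGB-encoded value in [0, 1]."""
    if lum <= 0.0031308:
        return 12.92 * lum
    return 1.055 * lum ** (1 / 2.4) - 0.055

dark_cutoff = 0.10  # darkest 10% of linear luminance

linear_codes = round(255 * dark_cutoff)
srgb_codes = round(255 * srgb_encode(dark_cutoff))

print(f"linear encoding: {linear_codes} of 256 codes for the darkest 10%")
print(f"sRGB encoding:   {srgb_codes} of 256 codes for the darkest 10%")
# ~26 vs ~89 codes: gamma encoding helps, but raw sensor data is linear,
# so shadow detail starts out with very few distinct levels.
```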

[–] DavidGarcia@feddit.nl 32 points 1 year ago (3 children)

Putting any other issues aside for a moment (I'm not saying they aren't also real): cameras need light to make photos, and the more light they get, the better the image quality. Just look at astronomy: we don't find the dark asteroids/planets/stars first, we find the brightest ones, and we know more about them than about a planet with lower albedo/light intensity. So it is literally, physically harder to collect information about anything dark, and that includes black people. If you compare a person with a skin albedo of 0.2 to one with 0.6, you get 3x less information in the same amount of time, all things being equal.

Also consider that cameras have a limited dynamic range, and that white skin is often much closer in brightness to the objects around us than black skin is. So the facial features of a black person may fall outside the dynamic range of the camera and be lost.
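
To put rough numbers on the albedo point (a sketch using the hypothetical 0.2 vs 0.6 values above, assuming a shot-noise-limited sensor):

```python
import math

# Rough photon-budget comparison for two skin albedos under identical
# lighting and exposure (hypothetical values, shot-noise-limited sensor).
albedo_dark, albedo_light = 0.2, 0.6

photon_ratio = albedo_light / albedo_dark  # 3x fewer photons reflected
snr_ratio = math.sqrt(photon_ratio)        # shot noise: SNR scales as sqrt(N)

print(f"photon ratio (light/dark): {photon_ratio:.1f}x")
print(f"SNR ratio (light/dark):    {snr_ratio:.2f}x")
# 3x fewer photons -> roughly 1.7x lower signal-to-noise ratio,
# before any dynamic-range clipping makes things worse.
```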

The real issue with these AIs is that they aren't well calibrated, meaning the output confidence should mirror how often predictions are correct: among 100 predictions made with 0.3 confidence, about 30 should be correct. Then any prediction with confidence lower than 90% or so should be illegal for the police to act on, or something like that. Basically, the model should tell you when it doesn't have enough information, and the police should act appropriately on that.
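
As a sketch of what "well calibrated" means in practice (synthetic data; this is the binning idea behind reliability diagrams and expected calibration error):

```python
import numpy as np

# Minimal calibration check: within each confidence bin, the fraction
# of correct predictions should match the average confidence in the bin.
rng = np.random.default_rng(0)
confidence = rng.uniform(0.1, 1.0, size=10_000)
# Simulate a well-calibrated matcher: correct with probability = confidence.
correct = rng.uniform(size=confidence.size) < confidence

bins = np.linspace(0.0, 1.0, 11)
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (confidence >= lo) & (confidence < hi)
    if mask.any():
        print(f"conf {lo:.1f}-{hi:.1f}: mean conf "
              f"{confidence[mask].mean():.2f}, "
              f"accuracy {correct[mask].mean():.2f}")
# For a calibrated model the two numbers track each other; a rule like
# "police may not act on matches below 0.9" only means something if they do.
```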

I mean, really, facial recognition should be illegal for the police to use at all, but that's beside the point.

[–] some_guy@lemmy.sdf.org 6 points 1 year ago

I don't know that facial recognition should be illegal for cops to use (though I don't want them using it, overall), but there should be guardrails in place that prevent them from using it as anything more than "let's look into this person further."

Put differently, a report of a certain model car of a certain color can tip them off to investigate someone driving such a car. It isn't a reason to arrest that person.

[–] isthismanas_droid@lemdro.id 3 points 1 year ago (1 children)

Exactly! I don't think any programmer would intentionally go out of their way to make it so that only people with dark skin tones are matched from the database. It has something to do with how hard it is to detect facial features on darker skin tones: the image vectors will carry noisier information per pixel, and the pixel intensities will be similar in some patches of the image because of the darker skin tone. But that's just my unbiased programmer's way of thinking. Let's hope the world is still beautiful! We are all humans, after all.

[–] lntl@lemmy.ml 2 points 1 year ago

Yes, there are technical challenges in implementing an AI solution such as this one. From a leadership perspective, however, arrests cannot be made on AI predictions alone. The predictions would be best used like an anonymous tip line that leads to further investigation, never directly to an arrest.

[–] mohKohn@kbin.social 23 points 1 year ago (1 children)

12 people. We're talking about 12 people, so any conclusions are suspect. That being said, facial recognition struggling with black faces due to insufficient training data is an extremely common problem, so it'd be unsurprising.

[–] lntl@lemmy.ml 4 points 1 year ago

That's exactly my point about media sensationalism. It's really not a large sample; far more people have been arrested and imprisoned by the justice system without any AI involvement.

[–] dbilitated@aussie.zone 22 points 1 year ago (1 children)

to be fair that seems to happen without AI too 😒

[–] lntl@lemmy.ml 2 points 1 year ago

Then the title would read:

In every reported case where police mistakenly arrested someone, that person has been Black

Yeah, that could be the case

[–] DessertStorms@kbin.social 20 points 1 year ago (2 children)

It's amazing how hard some people will work to deny that demonstrable biases, shaped by the society we live in, exist in and massively impact science and technology, as if they themselves were above such things, all while literally demonstrating their own biases.

[–] hh93@lemm.ee 6 points 1 year ago (2 children)

I always wonder whether the people who push back so hard against systemic/structural racism genuinely believe they are being oppressed when someone tries to address it, or whether they are fully aware of the advantages they have just because they were born with the "right" skin color in the right neighbourhoods, and oppose it for purely selfish reasons because they don't want to lose that advantage.

[–] EinfachUnersetzlich@lemm.ee 13 points 1 year ago* (last edited 1 year ago) (2 children)

Just a note that this appears to be USA-only. No comment on differences around the world (if the technology is used elsewhere).

[–] buwho@lemmy.ml 20 points 1 year ago

In the documentary Coded Bias, the police in England were using new AI facial recognition technology to find criminals on the streets, and yes, the people it flagged were almost all black. The documentary states that this is because the models were trained mostly on white people, so they couldn't differentiate black people's features as well. Or something to that effect.

[–] lntl@lemmy.ml 2 points 1 year ago

I imagine that in China the headline would read:

In every reported case where police mistakenly arrested someone using facial recognition, that person has been Chinese

[–] nieceandtows@programming.dev 12 points 1 year ago (6 children)

I don't think this is systemic racism. Rather, it's the technology itself that's lacking. I remember even those motion-activated bathroom sinks had problems working well with black hands. I think they're just not good enough at differentiating between darkness and black skin.

[–] vrighter@discuss.tchncs.de 9 points 1 year ago (2 children)

haha this reminds me of an episode of Better Off Ted where they replaced all the sensors with optical ones that didn't recognize black people. Their solution was to hire white guys to follow them around to open doors and turn on lights for them

[–] nieceandtows@programming.dev 2 points 1 year ago

That’s hilarious

[–] cobra89@beehaw.org 8 points 1 year ago* (last edited 1 year ago) (1 children)

IMO, the fact that the models aren't accurate for people of color, yet the AI is being put to use on them anyway, is the systemic racism. If the AI were not good at identifying white people, do we really think it would be in active use for arresting people?

It's not the fact that the technology is much, much worse at identifying people of color that is the issue; it's the fact that it's being used anyway despite that.

And if you say "oh, they're just being stupid and didn't realize it's doing that," then it's egregious that they didn't even check for it.

[–] nieceandtows@programming.dev 2 points 1 year ago

That part I can agree with. These issues should have been fixed before it was rolled out. The fact that they don’t care is very telling.

[–] DessertStorms@kbin.social 7 points 1 year ago

it’s the technology itself that’s lacking.

The technology is designed by people, people who didn't consider those with dark skin, and so designed a technology that is lacking.
Let's not act as if technology just springs spontaneously into being.

[–] lntl@lemmy.ml 3 points 1 year ago

I think it is some systemic racism. The AI didn't arrest this person; police officers did. They did no further investigation before making the arrest because they didn't have to: the person has black skin. Case closed.

[–] nxfsi@lemmy.world 12 points 1 year ago (2 children)

Reminder that Google still hasn't actually fixed the issue where its image recognition algorithm mislabels black people as gorillas.

[–] s20@lemmy.ml 7 points 1 year ago

This is my surprised face. 😶

[–] rikudou@lemmings.world 5 points 1 year ago (3 children)

AI can't be racist. It has bad training.

load more comments (3 replies)
[–] DessertStorms@kbin.social 3 points 1 year ago

The world of information the AI was trained on is racist, making the AI another way to perpetuate systemic racism, yes.

[–] lntl@lemmy.ml 1 points 1 year ago

Sort of, yeah. It's like a mirror: it reflects what it's presented with.
