this post was submitted on 26 Oct 2023
206 points (100.0% liked)

chapotraphouse

[–] StellarTabi@hexbear.net 34 points 1 year ago (3 children)

Unrelated, but I predict there will be more false accusations of AI-generated news images than actual misinformation in the near future.

[–] BodyBySisyphus@hexbear.net 22 points 1 year ago

Love to live in the era of epistemic breakdown

[–] invalidusernamelol@hexbear.net 13 points 1 year ago (1 children)

They'll claim it, but it's actually still easy to determine whether an image is AI-generated with minimal effort.

Legitimate images will have a source, and knowing the source lets you validate things like metadata and the location/time the image was taken.

AI is really only useful for entirely synthetic images.
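The source-validation idea above can be sketched as plain Python. This is a hypothetical illustration, not any real tool's API: it assumes the image's metadata (e.g. EXIF fields) has already been extracted into a dict, and the field names `"timestamp"` and `"gps"` are invented for the example.

```python
# Hypothetical sketch: cross-check an image's claimed provenance against
# metadata fields already extracted from it (e.g. by an EXIF reader).
# The field names ("timestamp", "gps") are illustrative assumptions.
from datetime import datetime

def provenance_consistent(claimed, metadata, max_km=50.0, max_hours=24.0):
    """Return True if the claimed time/place roughly matches the metadata."""
    # Time check: the embedded timestamp should be near the claimed one.
    fmt = "%Y-%m-%d %H:%M"
    t_claimed = datetime.strptime(claimed["timestamp"], fmt)
    t_meta = datetime.strptime(metadata["timestamp"], fmt)
    hours_off = abs((t_claimed - t_meta).total_seconds()) / 3600.0
    if hours_off > max_hours:
        return False

    # Location check: crude flat-map distance between GPS coordinates
    # (~111 km per degree), good enough to catch a wrong-city claim.
    lat1, lon1 = claimed["gps"]
    lat2, lon2 = metadata["gps"]
    km = ((lat1 - lat2) ** 2 + (lon1 - lon2) ** 2) ** 0.5 * 111.0
    return km <= max_km

match = provenance_consistent(
    {"timestamp": "2023-10-26 14:00", "gps": (50.45, 30.52)},
    {"timestamp": "2023-10-26 13:10", "gps": (50.40, 30.60)},
)
print(match)  # consistent claim -> True
```

The point isn't the math; it's that a sourced image gives you concrete fields to check, while an unsourced one gives you nothing to validate at all.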

[–] Sphks@lemmy.dbzer0.com 8 points 1 year ago (1 children)

Average people don't care. Otherwise, Fox News would not exist.

That's a bit of a misanthropic viewpoint. Sure, people will believe what they want, but AI images aren't going to convince anyone who wasn't already convinced, and they'll never serve as anything more than very temporary smokescreens that instantly undercut the credibility of whoever uses them.

[–] drhead@hexbear.net 7 points 1 year ago* (last edited 1 year ago)

It's already that way, from what I can tell.

AI classifier models are garbage. Most of them are only particularly good at identifying images processed through a specific model's autoencoder; if you don't deliberately try to mask that signature (which is possible), they have a fairly high recall rate on those. But they have MASSIVE false positive rates, with a variety of known and unknown triggers. In particular, I've seen a lot of flagged images which upon closer inspection looked plausibly real once you consider how fucking awful the postprocessing on some cameras can be.
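The recall vs. false-positive problem above is just confusion-matrix arithmetic. A quick sketch, with numbers invented purely for illustration, shows why a detector with decent recall can still produce mostly false accusations:

```python
# Sketch of why high recall doesn't save a detector with a bad false
# positive rate: on a mostly-real image pool, most "AI" flags are wrong.
# All numbers below are made up for illustration.

def recall(tp, fn):
    # fraction of actual AI images the detector catches
    return tp / (tp + fn)

def false_positive_rate(fp, tn):
    # fraction of real images wrongly flagged as AI
    return fp / (fp + tn)

def precision(tp, fp):
    # fraction of "AI" flags that are actually correct
    return tp / (tp + fp)

# Suppose 100 AI images and 10,000 real images pass through the detector.
tp, fn = 90, 10        # it catches 90% of the AI images (high recall)...
fp, tn = 500, 9500     # ...but also flags 5% of the real images.

print(recall(tp, fn))               # 0.9
print(false_positive_rate(fp, tn))  # 0.05
print(precision(tp, fp))            # ~0.15: most flags are false accusations
```

Because real images vastly outnumber AI ones in most feeds, even a modest false positive rate swamps the true detections.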

And it's not even images that would make sense to AI generate that people are pulling this on. I would think that you would ~~pull the AI-generated card on~~ AI generate propaganda images of something that is incredibly damning yet also hard to disprove. But most of the claims of "AI-generated" propaganda images I see are over things that don't really prove the claim the propagandist is trying to make, or that don't even show anything particularly abnormal. That's more than just falsely assuming something; it's outright failing to understand how propaganda works in the first place, which is a much more serious problem.