this post was submitted on 17 Feb 2024
9 points (100.0% liked)

News

Breaking news and current events worldwide.

Google, Meta, Microsoft, OpenAI and TikTok outline methods they will use to try to detect and label deceptive AI content

top 11 comments
[–] bedrooms@kbin.social 3 points 9 months ago

Err... ChatGPT detectors are only around 50% accurate, basically a coin flip. These "reasonable precautions" translate to "we'll try, but there's nothing we can really do."

[–] SolacefromSilence@kbin.social 3 points 9 months ago

Can't have "AI" influencing people; you need to purchase their ads if you want that.

[–] athos77@kbin.social 3 points 9 months ago

AKA: if we pretend to vaguely do something, with no consequences for not following through, we can argue that we're responsive and self-regulating, and hopefully avoid real regulation with teeth.

[–] Treczoks@kbin.social 2 points 9 months ago

Unless there is a hard law with harsh and damning penalties, they will do exactly fuck all.

[–] henfredemars@infosec.pub 2 points 9 months ago (2 children)

I guess having ideas about what could be done to address this problem is better than nothing. None of these organizations have demonstrated the capability to actually prevent abuse of AI and the proliferation of disinformation.

[–] livus@kbin.social 1 point 9 months ago (1 child)

@henfredemars I'm not sure they have much willingness either, Meta in particular, but I guess this is better than nothing.

[–] sbv@sh.itjust.works 2 points 9 months ago

In some senses it's worse: they're making a half-assed effort to sElF rEgUlAtE so governments don't pass laws to limit what they can do.

This is the menthol cigarette of AI regulation.

[–] Anticorp@lemmy.world 1 point 9 months ago

Maybe they can ask the AI how to prevent abuse.

I am 100% sure that their measures will be at best marginally effective, and that they'll drop them at some point because "they're unprofitable."

[–] NarrativeBear@lemmy.world 2 points 9 months ago* (last edited 9 months ago) (1 child)

IMO branding all of this as AI is an issue; at this point it's still just chatbots and image generators. None of it is actually "intelligence" in any real sense. It's like saying the autocorrect on your phone is AI.

All this randomly generated "gibberish" should carry a digital watermark embedded in the image itself. That way platforms could detect it and flag the image as not verified/real (a rough sketch of the idea is below). Unlike this photo I just took.

[attached photo: 1000008124]
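
Something like this toy sketch is what I mean by "embedded in the image" (Python with Pillow; a naive least-significant-bit scheme, and the MAGIC tag and function names are made up for illustration). Real provenance systems, think C2PA metadata or Google's SynthID, are far more robust than this:

```python
# Toy least-significant-bit (LSB) watermark: hide a byte tag in the red
# channel of a lossless image, one bit per pixel. Illustration only.
from PIL import Image

MAGIC = b"AI-GEN"  # hypothetical tag marking generated content


def embed_watermark(src_path: str, dst_path: str, payload: bytes = MAGIC) -> None:
    """Overwrite the red-channel LSB of the first pixels with `payload` bits."""
    img = Image.open(src_path).convert("RGB")
    pixels = img.load()
    bits = "".join(f"{byte:08b}" for byte in payload)
    w, h = img.size
    if len(bits) > w * h:
        raise ValueError("image too small for payload")
    for i, bit in enumerate(bits):
        x, y = i % w, i // w
        r, g, b = pixels[x, y]
        pixels[x, y] = ((r & ~1) | int(bit), g, b)  # clear red LSB, set our bit
    img.save(dst_path, "PNG")  # must stay lossless, or the bits are destroyed


def has_watermark(path: str, payload: bytes = MAGIC) -> bool:
    """Read back the first len(payload) bytes of red-channel LSBs and compare."""
    img = Image.open(path).convert("RGB")
    pixels = img.load()
    w, h = img.size
    n_bits = len(payload) * 8
    if n_bits > w * h:
        return False
    bits = "".join(str(pixels[i % w, i // w][0] & 1) for i in range(n_bits))
    recovered = bytes(int(bits[i:i + 8], 2) for i in range(0, n_bits, 8))
    return recovered == payload
```

A generator would call embed_watermark() on everything it outputs, and a platform would call has_watermark() on upload. The obvious catch: a single screenshot, resize, or JPEG re-encode wipes the bits, so at best a watermark proves an image is AI-generated when it survives; it can never prove an image is real.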

[–] notfromhere@lemmy.ml 2 points 9 months ago

Watermarking is a flawed idea and would only serve the incumbent corporations who already have products, fucking over any open source projects or researchers.