this post was submitted on 29 Jul 2023
24 points (100.0% liked)

Technology

top 3 comments
[–] baseless_discourse@mander.xyz 19 points 1 year ago* (last edited 1 year ago)

Right, since letting big tech claim ownership of their stuff online has never gone wrong ever, cough cough, DRM.

[–] GeneralRetreat@beehaw.org 8 points 1 year ago

since C2PA relies on creators to opt in, the protocol doesn’t really address the problem of bad actors using AI-generated content. And it’s not yet clear just how helpful the provision of metadata will be when it comes to media fluency of the public. Provenance labels do not necessarily mention whether the content is true or accurate.

Interesting approach, but I can't help but feel the actual utility is fairly limited. For example, I could see it being useful for large corporate creative studios that have contractual / union agreements that govern AI content usage.

If they're using enterprise tools that build in C2PA, it'd give them a metadata audit trail showing exactly when and where AI was used.
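The audit-trail idea can be sketched in a few lines. This is a toy illustration only, not the real C2PA format (which uses cryptographically signed JUMBF/CBOR manifests); the tool name and action labels here are hypothetical. The core mechanism is binding a provenance claim to a hash of the asset's bytes at the point of creation, so later edits are detectable:

```python
import hashlib

# Simplified stand-in for a C2PA-style provenance manifest: a claim about
# how an asset was made, bound to a hash of the asset's bytes.

def make_manifest(asset_bytes: bytes, tool: str, actions: list[str]) -> dict:
    """Attach a provenance claim at the point of creation."""
    return {
        "claim_generator": tool,    # e.g. "ExampleEditor/1.0" (hypothetical)
        "actions": actions,         # e.g. ["created", "ai_generated"]
        "content_hash": hashlib.sha256(asset_bytes).hexdigest(),
    }

def verify_manifest(asset_bytes: bytes, manifest: dict) -> bool:
    """Recompute the hash; a mismatch means the asset changed after tagging."""
    return hashlib.sha256(asset_bytes).hexdigest() == manifest["content_hash"]

image = b"\x89fake-image-bytes"
manifest = make_manifest(image, "ExampleEditor/1.0", ["created"])
print(verify_manifest(image, manifest))              # True: untouched since tagging
print(verify_manifest(image + b"edit", manifest))    # False: modified after tagging
```

Note that this only works if the generating tool cooperates and writes the manifest in the first place, which is exactly the opt-in weakness discussed below.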

That's completely useless in the context where AI content flagging is most useful though. As the quote says, this provenance data is applied at the point of creation, and in a world where there are open source branches of generation models, there's no way to ensure provenance tagging is built in.

This technology is most needed to combat AI powered misinformation campaigns, when that is the use case this is least able to address.

[–] vhstape@beehaw.org 5 points 1 year ago

I really like this idea, but I don't think it should be opt-in. Generative AI tools have such a high potential for misuse that some form of provenance should be baked into the network architecture.
