this post was submitted on 26 Feb 2025
769 points (98.6% liked)

Technology


Update: After this article was published, Bluesky restored Kabas' post and told 404 Media the following: "This was a case of our moderators applying the policy for non-consensual AI content strictly. After re-evaluating the newsworthy context, the moderation team is reinstating those posts."

Bluesky deleted a viral, AI-generated protest video in which Donald Trump sucks on Elon Musk’s toes because its moderators said it was “non-consensual explicit material.” The video was broadcast on televisions inside the offices of the Department of Housing and Urban Development earlier this week and quickly went viral on Bluesky and Twitter.

Independent journalist Marisa Kabas obtained the video from a government employee and posted it on Bluesky, where it went viral. On Tuesday night, Bluesky moderators deleted the video, saying it was “non-consensual explicit material.”

Other Bluesky users said that versions of the video they uploaded were also deleted, though it is still possible to find the video on the platform.

Technically speaking, the AI video of Trump sucking Musk’s toes, which was overlaid with the words “LONG LIVE THE REAL KING,” is a nonconsensual AI-generated video, because Trump and Musk did not consent to it. But social media platforms’ content moderation policies have always had carve-outs that allow for criticism of powerful people, especially the world’s richest man and the literal president of the United States.

For example, we once obtained Facebook’s internal rules about sexual content for content moderators, which included broad carve-outs to allow sexual content that criticized public figures and politicians. The First Amendment, which does not apply to social media companies but is relevant considering that Bluesky told Kabas she could not use the platform to “break the law,” offers essentially unlimited protection for criticizing public figures the way this video does.

Content moderation has been one of Bluesky’s growing pains over the last few months. The platform has millions of users but only a few dozen employees, which means perfect content moderation is impossible and much of it necessarily has to be automated. That will lead to mistakes. But the video Kabas posted was one of the most popular posts on the platform earlier this week and sparked a national conversation about the protest. Deleting it, whether accidentally or because the platform’s moderation rules are so strict that they do not allow this type of reporting on a protest against the President of the United States, is a problem.

[–] rottingleaf@lemmy.world -3 points 4 hours ago (1 children)

Moderation should be optional.

Say a message can carry any number of "moderating authority" verdicts, and each user sets up how those verdicts are interpreted: show only messages vetted by authority A, only messages vetted by B, messages vetted by A logical-or B, all messages not blacklisted by A, and plenty of other variants. Say, trust authority C unless authority F thinks otherwise, because we trust F to know about things C is trying to reduce the visibility of.
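
A minimal sketch of what composing those verdicts could look like, in Python. Everything here is hypothetical (the Verdict type, the authority names, the combinators); it's just one way to wire up per-user interpretation of per-authority verdicts:

```python
from enum import Enum

class Verdict(Enum):
    APPROVED = "approved"
    BLACKLISTED = "blacklisted"
    NO_OPINION = "no_opinion"

# Each post carries per-authority verdicts; this one was approved by C
# but blacklisted by F (all authority names are made up).
post = {"id": 42, "verdicts": {"C": Verdict.APPROVED, "F": Verdict.BLACKLISTED}}

def verdict(p, authority):
    # An authority that never looked at the post has no opinion on it.
    return p["verdicts"].get(authority, Verdict.NO_OPINION)

# A "rule" is just a predicate post -> bool; users pick and combine them.
def vetted_by(authority):
    return lambda p: verdict(p, authority) == Verdict.APPROVED

def not_blacklisted_by(authority):
    return lambda p: verdict(p, authority) != Verdict.BLACKLISTED

def either(rule_a, rule_b):
    # the "A logical-or B" variant
    return lambda p: rule_a(p) or rule_b(p)

def overridden_by(rule, veto):
    # trust `rule`, unless the `veto` authority objects
    return lambda p: rule(p) and veto(p)

# "Show me what C vets, unless F has blacklisted it."
my_feed_rule = overridden_by(vetted_by("C"), not_blacklisted_by("F"))
print(my_feed_rule(post))  # False: C approved, but F's blacklist wins
```

The point is that the platform only stores verdicts; which verdicts count, and how they combine, is decided on the user's side.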

Filtering and censorship are two different tasks. We don't need censorship to avoid seeing CSAM. Filtering is enough.

This fallacy is very easy to encounter: people justify censoring something for everyone by their own unwillingness to encounter it, as if that problem weren't solvable. They also refuse to see that it is technically solvable. Such a "verdict" from a moderation authority is, by the way, no harder to implement than an upvote or a downvote.

For a human, or even a group of humans, it's hard to pre-moderate every post in a given period of time, but that's solvable too: put an AI classifier in front of the humans and have them check only the uncertain cases (or cases someone complained about, or cases where another good moderation authority flagged the opposite way, you get the idea).
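
A minimal sketch of that triage step, assuming a hypothetical classify() model and made-up confidence thresholds:

```python
# Stand-in for a real model: returns the estimated probability that a
# post violates policy. A real deployment would call an actual classifier.
def classify(post_text: str) -> float:
    return 0.5  # placeholder score for illustration

def triage(post_text: str, lo: float = 0.1, hi: float = 0.9) -> str:
    """Route a post based on classifier confidence; thresholds are made up."""
    score = classify(post_text)
    if score >= hi:
        return "auto-flag"      # confident violation: hide, notify reviewers
    if score <= lo:
        return "auto-approve"   # confident non-violation: publish as-is
    return "human-review"       # uncertain: only these reach the humans

posts = ["first post", "second post"]
review_queue = [p for p in posts if triage(p) == "human-review"]
print(review_queue)  # with the placeholder score, everything lands here
```

Only the middle band of uncertain scores ever reaches a human, which is what would make pre-moderating every post tractable.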

I like this subject; I think it's very important if the Web is to have a good future.

[–] CarbonBasedNPU@lemm.ee 4 points 4 hours ago (1 children)

people justify censoring something for everyone by their own unwillingness to encounter it...

I can't engage in good faith with someone who says this about CSAM.

Filtering and censorship are two different tasks. We don’t need censorship to avoid seeing CSAM. Filtering is enough.

No it is not. People are not tagging their shit properly when it is illegal.

[–] rottingleaf@lemmy.world -1 points 4 hours ago

I can't engage in good faith

Right, you can't.

If someone posts CSAM, police should get their butts to that someone's place.

No it is not. People are not tagging their shit properly when it is illegal.

What I described doesn't have anything to do with people tagging what they post. It's about users choosing the logic of interpreting moderation decisions. But I've described it very clearly in the previous comment, so please read it or leave the thread.