Honestly, this cannot be worse than some of the moderators.
I can't wait for it to ban the admins and nuke the shit subreddits that should have been shut down ages ago.
As long as the AI is capable enough, I don't see what's wrong with it, and I understand if Reddit decides to utilize AI for financial reasons. I don't know how capable the AI is, and it's certainly not perfect, but AI is a technology and it will improve over time. If a job can be automated, I don't see why it shouldn't be automated.
AI is often only trained on neurotypical cishet white men. What happens when a community of colour is full of people who don't have the same conversational norms as white people, and the bot thinks they're harassing each other? What happens when a neurodivergent community talk to each other in a neurodivergent way? Autistic people often get called "robotic", will the AI feel the same way and ban them as bots? What happens when an AI is used to moderate a trans community, and flags everything as NSFW because its training data says "transgender" is a porn category?
I think it's a bold assumption that AI is often only trained on neurotypical cishet white men, though it is a possibility. I don't fully understand how AI works or how companies train their AI, so I cannot comment any further. I admit AI has its downsides, but it also has its upsides, same as humans. Reddit is free to utilize AI to moderate subreddits, and users are free to complain or leave Reddit if they deem the AI more harmful than helpful.
Did you write your comment with chatgpt?
Nope, just my personality. I think I have grammar mistakes too.
Just checked this with an AI detector and it said human. Bot 1, human 0. This sentence kinda undermined your point for keeping moderation human-only.
AI is often only trained on neurotypical cishet white men.
Can you back up this claim? Unless you're just being an assumer, or you expect people to be suckers/gullible/"chrust" you.
What happens when a community of colour is full of people who don’t have the same conversational norms as white people
In this statement alone, there are not one but two instances of racist discourse:
- Conflating culture (conversational norms) with race.
- Singling out "white people", but lumping together the others under the same label ("people of colour").
You are being racist. What you're saying there boils down to "those brown people act in weird ways because they're brown". Don't.
What happens when a neurodivergent community talk to each other in a neurodivergent way? Autistic people often get called “robotic”, will the AI feel the same way and ban them as bots?
The reason why autists are often called "robotic" has to do with voice prosody. It does not apply to text.
And the very claim that you're making - that autists would write in a way that an "AI" would confuse them with bots - sounds, frankly, dehumanising and insulting towards them, and it reinforces the stereotype that they're robotic.
[From another comment] Did you write your comment with chatgpt?
Passive aggressively attacking the other poster won't help.
Odds are that you're full of good intentions writing the above, but frankly? Go pave hell back in Reddit; you're being racist and dehumanising.
The problem is the perverse incentives for “service”. Yes, ideally, things that can be automated should be. But what about when it’s insufficient, can’t satisfy the customer, or is just worse service? Those cases will always exist, but will the companies provide an alternative?
We’re all familiar with voice menus and chatbots for customer service, and there are many cases where those provide service faster and cheaper than a human could. However, what we remember is how useless they were that one time, and how much effort it took to escape that hell to talk to someone who could actually help.
If this AI is just better language recognition, or if it makes me type complete sentences only to point me to the same useless FAQ yet again, I’ll scream.
As long as the AI is capable enough
The model-based decision making is likely not capable enough. Especially not for the way that Reddit Inc. would likely use it - leaving it in charge of removing users and content assumed to be problematic, instead of flagging them for manual review.
I'm especially sceptical of the claim on the site that their Hive Moderation has "human-level accuracy". Especially over time - as people are damn smart when it comes to circumventing automated moderation. Also, let us not forget that human accuracy varies quite a bit, and you definitely don't want average accuracy, you want good accuracy.
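To make the flag-versus-remove distinction concrete, here's a rough sketch of the kind of routing I'd rather see - the `route_post` function, the score name, and the thresholds are all made up for illustration, not anything Hive or Reddit Inc. actually exposes:

```python
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    text: str

def route_post(post: Post, toxicity_score: float,
               review_threshold: float = 0.80,
               escalate_threshold: float = 0.98) -> str:
    """Route a post based on model confidence. Nothing is removed automatically;
    higher scores only raise the urgency of human review."""
    if toxicity_score >= escalate_threshold:
        return "escalate_for_urgent_human_review"
    if toxicity_score >= review_threshold:
        return "flag_for_manual_review"
    return "leave_up"

# Example: a mid-confidence score lands in the review queue instead of being removed.
print(route_post(Post("t3_abc", "example text"), toxicity_score=0.86))
# -> flag_for_manual_review
```

The design point is that the model's confidence only decides how quickly a human looks at something, never whether it gets removed outright.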
Regarding the talk about biases, from another comment: models are prone to amplify biases, not just reproduce them. As such, the model doesn't even need to be trained only on a certain cohort to be biased.
Automated spam detection has been around for decades. As trolls and spammers get more sophisticated, the technology to combat them will continue to evolve. I don't see any new situation to be surprised or concerned about. Of course any kind of content moderation system can be implemented poorly, but that's a different claim.
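To give an idea of what that decades-old technology looks like, here's a toy bag-of-words spam scorer in the spirit of classic e-mail filters - the training examples and the smoothing constant are made up purely for illustration:

```python
from collections import Counter
import math

# Tiny, made-up training sets standing in for labelled spam and ham.
spam_docs = ["buy cheap meds now", "win money fast", "cheap meds fast"]
ham_docs = ["meeting moved to friday", "thanks for the report", "see you at lunch"]

def word_counts(docs):
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

spam_counts, ham_counts = word_counts(spam_docs), word_counts(ham_docs)
vocab_size = len(set(spam_counts) | set(ham_counts))

def spam_score(text, alpha=1.0):
    """Naive-Bayes-style log odds that the text is spam; positive means 'looks spammy'."""
    score = math.log(len(spam_docs) / len(ham_docs))  # class prior
    for word in text.split():
        p_spam = (spam_counts[word] + alpha) / (sum(spam_counts.values()) + alpha * vocab_size)
        p_ham = (ham_counts[word] + alpha) / (sum(ham_counts.values()) + alpha * vocab_size)
        score += math.log(p_spam / p_ham)
    return score

print(spam_score("cheap meds now"))  # positive: looks like spam
print(spam_score("see you friday"))  # negative: looks like ham
```

The arms race is exactly that the word lists, features, and models keep getting retrained as spammers adapt - same idea, continuously updated.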
Is anyone surprised? I’d bet they are using AI-powered bots to increase engagement and repost content.
Sacrifice it all for that incompetently inept incoming IPO.
As horrible as that seems, at least the AI might be impartial and non-partisan when it comes to levying bans, unlike Reddit admins who will ban you even if you didn't break any rules at all, as long as they disagree with your opinion.
My ten-year-old account was banned with no appeal for "report abuse". I literally reported once: a post with images of dead children that was not marked NSFW. Go figure.
r/popcorn?
News and Discussions about Reddit
Welcome to !reddit. This is a community for all news and discussions about Reddit.
The rules for posting and commenting, besides the rules defined here for lemmy.world, are as follows:
Rules
Rule 1- No brigading.
**You may not encourage brigading any communities or subreddits in any way.**
Rule 2- No illegal or NSFW or gore content.
**No illegal or NSFW or gore content.**
Rule 3- Do not seek mental, medical, or professional help here.
Do not seek mental, medical, or professional help here. Breaking this rule will not get you or your post removed, but it will put you at risk, and possibly in danger.
Rule 4- No self promotion or upvote-farming of any kind.
That's it.
Rule 5- No baiting or sealioning or promoting an agenda.
Posts and comments which, instead of being of an innocuous nature, are specifically intended (based on reports and in the opinion of our crack moderation team) to bait users into ideological wars on charged political topics will be removed and the authors warned - or banned - depending on severity.
Rule 6- Regarding META posts.
Provided it is about the community itself, you may post non-Reddit posts using the [META] tag on your post title.
Rule 7- You can't harass or disturb other members.
If you vocally harass or discriminate against any individual member, you will be removed.
Likewise, if you are a member, sympathiser, or apparent supporter of a movement that is known to largely hate, mock, discriminate against, and/or want to take the lives of a group of people, and you have been provably vocal about your hate, then you will be banned on sight.
Rule 8- All comments should try to stay relevant to their parent content.
Rule 9- Reposts from other platforms are not allowed.
Let everyone have their own content.
Rule 10- Majority of bots aren't allowed to participate here.