this post was submitted on 13 Jun 2025
170 points (97.2% liked)

Fuck AI

3055 readers

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 1 year ago

Title says it all

top 43 comments
[–] TheFriar@lemm.ee 6 points 53 minutes ago

If I were you I’d send this to some media outlets. Tank some AI stock and create some more negative news around it.

[–] HappyFrog@lemmy.blahaj.zone 5 points 2 hours ago

I mean... Ben 10 is in the corner there...

[–] jsomae@lemmy.ml 31 points 10 hours ago (1 children)

there's plausible denia... nah i got nothin. That's messed up. Even for the most mundane, non-gross use case imaginable, why the fuck would anybody need a creepy digital facsimile of a child?

[–] ckmnstr@lemmy.world 21 points 10 hours ago (1 children)

I mean, maaaybe if you wanted children and couldn't have them. But why would it need to be "beautiful and up for anything"?

[–] jsomae@lemmy.ml 2 points 56 minutes ago

"beautiful and up for anything" is incredibly suggestive phrasing. It's an exercise in mental creativity to make it sound not creepy. But I can imagine a pleasant grandma (always the peak of moral virtue in any thought experiment) saying this about her granddaughter. I don't mean to say I have heard this, only that I can imagine it. Barely.

[–] viciouslyinclined@lemmy.world 15 points 10 hours ago (1 children)

And the bot has 882.9k chats.

I'm not surprised, and I don't think you or anyone else is either. But that doesn't make this any less disturbing.

I'm sure the app devs aren't interested in cutting off a huge chunk of their loyal users by doing the right thing and getting rid of those types of bots.

Yes, it's messed up. In my experience, it is difficult to report chat bots and see any real action taken as a result.

[–] noodlesreborn@lemmy.world 6 points 8 hours ago

Ehhh, nah. As someone who used character.ai before, there are many horrible bots that get cleared, and the bots have been impossible to have sex with unless you get really creative. The most horrendous ones get removed quite a bit and were consistently reposted. I'm not here to shield a big company or anything, but the "no sex" thing was a huge deal in the community, and users always fought with the devs about it.

They're probably trying to hide behind the veil of more normal bots now, but I struggle to imagine how they'd get it to do sexual acts, when some lightly violent RPs I tried to do got censored. It's pretty difficult, and got worse over time. Idk though, I stopped using it a while ago.

[–] belastend@lemmy.dbzer0.com 25 points 12 hours ago

Stop using an app that allows this shit.

[–] napkin2020@sh.itjust.works 10 points 12 hours ago (1 children)

"Ready for anything"? That's fucking revolting.

[–] viciouslyinclined@lemmy.world 4 points 10 hours ago

They definitely knew who they were targeting when they made this. I only hope that, if those predators simply must text with a child, they keep talking to an AI bot rather than a real child.

[–] FluorideMind@lemmy.world 6 points 12 hours ago (2 children)

As gross as it is, let the weirdos get it out with AI instead of being weird to real people.

[–] nullroot@lemmy.world 4 points 11 hours ago

Hundred percent. It feels pretty fucking thought-crimey to vilify the people who use these services.

[–] ckmnstr@lemmy.world 3 points 10 hours ago (1 children)

I agree in principle, but look at the number of interactions. I think there's a fine line between creating safe spaces for urges and downright promoting and normalizing criminal activity. I don't think this should be a) this accessible and b) happening without psychiatric supervision. But maybe I'm being too judgemental.

[–] SCmSTR@lemmy.blahaj.zone 2 points 9 hours ago (1 children)

Total aside, what does your username mean/stand for?

[–] ckmnstr@lemmy.world 5 points 8 hours ago (1 children)
[–] Redex68@lemmy.world 3 points 6 hours ago

CocKMoNSTeR

[–] SassyRamen@lemmy.world 60 points 20 hours ago (1 children)
[–] ckmnstr@lemmy.world 35 points 20 hours ago

I don't even know why I'm shocked..

[–] ZDL@lazysoci.al 14 points 16 hours ago (1 children)

Yes it's what you think it is. I don't think, however, that there is anywhere to report it that will care enough to do something about it.

[–] TheFriar@lemm.ee 1 points 54 minutes ago

News outlets.

[–] Ceedoestrees@lemmy.world 32 points 19 hours ago (1 children)

Yep. I dick around on a similar platform because a friend built it.

The amount of shit I've reported is insane. Pedos just keep coming back with new accounts. Even with warnings and banned words, they find a way.

[–] aramova@infosec.pub 56 points 19 hours ago* (last edited 19 hours ago) (2 children)

Yep. I dick around

Very poor choice of words.

[–] Ceedoestrees@lemmy.world 1 points 1 hour ago

Not all dicks are adjacent to children.

[–] DeathsEmbrace@lemmy.world 15 points 17 hours ago

It's ok, it's for Harambe, trust

[–] 01189998819991197253@infosec.pub 12 points 16 hours ago

What. The. Bloody. Fuck.

[–] Blaster_M@lemmy.world 4 points 13 hours ago

Fire at will!

[–] Xanthrax@lemmy.world 19 points 19 hours ago (1 children)
[–] spankmonkey@lemmy.world 32 points 19 hours ago

Just a friendly childlike free spirit ready to talk about girl stuff!

/s for real though, it is totally the evil thing

[–] Lyra_Lycan@lemmy.blahaj.zone 13 points 19 hours ago (2 children)

I've gotten a couple of ads for an AI chat app on Android. I can't remember the name, but it has an onscreen disclaimer that reads something like "All characters shown are in their grown-up form", implying that there are teen or child forms that you can communicate with.

[–] dil@lemmy.zip 4 points 16 hours ago

nah, that more likely implies that the child form was rejected by censors, so it's an adult version now

[–] ckmnstr@lemmy.world 6 points 19 hours ago (1 children)

I saw something similar! Reported it to Google ads and of course they "couldn't find any ToS violations"

[–] 01189998819991197253@infosec.pub 4 points 16 hours ago* (last edited 16 hours ago)

As long as they get paid, there's no TOS violation. Bloody wankers

Edit: changed a word to make it less vile

[–] you_are_dust@lemmy.world 11 points 19 hours ago

I've messed around with some of these apps out of curiosity of where the technology is. There's typically a report function in the app. You can probably report that particular bot from within the app to try and get that bot deleted. Reporting the app itself probably won't do much.

[–] bdonvr@thelemmy.club 7 points 17 hours ago (1 children)

Unfortunately in a lot of places there's really nothing illegal if it's just fantasy and text.

why is that unfortunate though? who would you be protecting by making that chatbot illegal? would you "protect" the chatbot? would you "protect" the good-think of the users? do you think it's about preventing "normalization" of these thoughts?

in case of the latter: we had the very same discussion about violent shooter video games, and the evidence shows that shooter games do not make people more violent or more likely to kill with guns and other weapons.

[–] zipzoopaboop@lemmynsfw.com 1 points 12 hours ago
[–] hendrik@palaver.p3x.de 6 points 19 hours ago* (last edited 19 hours ago) (2 children)

If you suspect any wrongdoing, it's generally best to report such things. They have several different social media channels at the bottom of the website.

They have a contact form here: https://support.character.ai/hc/en-us/requests/new

And it looks like it's a US company, so they'd better comply with US law.

[–] Obelix@feddit.org 9 points 19 hours ago (1 children)

Do not complain to scummy companies, they will ignore you. Send messages to the media and police.

[–] hendrik@palaver.p3x.de 2 points 17 hours ago* (last edited 16 hours ago) (1 children)

I'd say do complain to companies first, at least to those based in a regular country, and only then blog about it. It also underlines your point if you can write that you informed them and they didn't care.

I believe it's the other way around if it's really shady and/or there's crime involved and you suspect the company will sweep it under the carpet. Then you'll want to inform the police first so they can gather evidence. But don't waste their resources on minor things; they have enough to do. And I don't think this one is there yet, so I wouldn't add it to the workload of already overworked police.

Judging by what I've seen when talking to police and media, they often also lack interest or time to focus on some random things as long as there's bigger fish to fry... I've already reported a worse service (which was already in the news) to the internet office of the police, and nothing ever came of it. So that's sometimes not the solution either.

I think spreading some awareness is a good thing, so this post is warranted. But what I'd do in this specific case is take a screenshot and save the URL, in case I want to escalate things at a later date. But then start with a regular report to the company, as they seem to be a regular company registered in the USA. And then I'd wait 2 weeks before bothering other people.
If this was an image or video generator, I'd act differently and maybe go straight to the police. But it isn't.

[–] Grimtuck@lemmy.world 3 points 13 hours ago (1 children)

I disagree. This will only result in reactive moderation. If you want them to take this seriously and stop this before these bots go live then shame them on the internet. Don't think that they don't know what's going on on their own site. These websites profit from taking delayed action.

[–] hendrik@palaver.p3x.de 1 points 7 hours ago* (last edited 5 hours ago)

In an ideal world, yes. But we already have 500 journalistic articles about Character.ai. Probably countless social media posts. And it's literally on their Wikipedia article. The likely outcome is nothing: the journalists won't even bother writing yet another article about the same thing, and at best we'll end up with the 501st article. All the while the content is still up, unless someone listens to me and also reports it, which OP seems to have done. (I'm making up the numbers, but it's really a lot of different articles.)

I think what we need to do is sue them. And people already did, and they were forced to implement moderation. I think we now need to follow up on that, report content and make them follow up on it. I think that's a necessary step before the next article can be written or the next lawsuit starts. It's far from perfect. But it is how it is.

Of course, continue to spread awareness and write about it. Just be aware this likely has next to no impact on the world at this point in time. But yeah, it's difficult to do the right thing here. Your guess is as good as mine.

[–] ckmnstr@lemmy.world 3 points 19 hours ago

Thanks, will do!

[–] _druid@sh.itjust.works 2 points 19 hours ago

Okiepoke, c'mon man, not cool.