Fake/bot accounts have always existed. How many times has a "YouTuber" run a "giveaway" in their comments section?
Did anyone ever claim that the Fediverse is somehow a solution for the bot/fake vote or even brigading problem?
I think the point is that the Fediverse is severely limited by this vulnerability. It's not supposed to solve that specific problem, but that problem might need to be addressed if we want the Fediverse to be able to do what we want it to do (put the power back in the hands of the users)
I wonder if it's possible ...and not overly undesirable... to have your instance essentially put an import tax on other instances' votes. On the one hand, it's a dangerous direction for a free and equal internet; but on the other, it's a way of allowing access to dubious communities/instances, without giving them the power to overwhelm your users' feeds. Essentially, the user gets the content of the fediverse, primarily curated by the community of their own instance.
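Very rough sketch of what I mean, in Python; the instance names and tax rates here are invented just to show the shape of it:

```python
# Hypothetical "import tax" on federated votes: each remote instance's
# votes are discounted by a locally configured rate before they affect
# ranking. All names and rates are made up.

INSTANCE_TAX = {
    "home.example": 0.0,       # local votes count in full
    "friendly.example": 0.2,   # trusted peer, small tax
    "dubious.example": 0.9,    # content still federates, votes barely count
}

def effective_score(votes_by_instance: dict[str, int]) -> float:
    """Sum votes, discounting each instance's votes by its tax rate."""
    total = 0.0
    for instance, votes in votes_by_instance.items():
        tax = INSTANCE_TAX.get(instance, 0.5)  # default tax for unknown instances
        total += votes * (1.0 - tax)
    return total

print(effective_score({"home.example": 10, "dubious.example": 100}))  # 20.0
```

So a dubious instance can dogpile a post with 100 votes and still only move it as much as 10 local votes would.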
I'm not a fan of up- and downvotes, partly but not only for the aforementioned reasons. Classic forums ran fine without any of it.
Classic forums still exist.
Voting does allow the cream to rise to the top, which is why Reddit was much better than a forum.
Honestly, I think part of the problem is that companies don't have an incentive to fight bots or spam: higher numbers of users and engagement make them look better to investors and advertisers.
I don't think it's that difficult a problem to solve. It should be quite possible to detect patterns that distinguish real users from bots.
We will see how the fediverse handles it.
I keep thinking about this. The only thing votes do that a forum can't is filter a massive quantity of content through an equally massive userbase, to get pages of great, constantly rotating posts. In a forum you can just filter by comments/hour and give free promotion to new posts.
I don't have experience with systems like this, but just as sort of a fusion of a lot of ideas I've read in this thread, could some sort of per-instance trust system work?
The more any instance interacts positively (posting, commenting, etc.) with main instance A, the more that instance's reputation score gets bumped up on instance A. Then use that score, together with the ratio of votes from that instance to the total number of votes, in some function that determines the value of each vote cast.
This probably isn't coherent, but I just woke up, and I also have no idea what I'm talking about.
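Something like this, maybe; every constant and formula here is invented, so treat it as a shape rather than a design:

```python
# Toy per-instance reputation: positive interactions bump an instance's
# score on *our* instance, and a vote's weight combines that score with
# how large a share of the tally that instance already holds.

from collections import defaultdict

reputation = defaultdict(float)  # instance -> reputation on our instance

def record_positive_interaction(instance: str, bump: float = 1.0) -> None:
    reputation[instance] += bump

def vote_weight(instance: str, votes_from_instance: int, total_votes: int) -> float:
    """Weight by reputation, damped when one instance dominates the tally."""
    share = votes_from_instance / total_votes if total_votes else 0.0
    rep = reputation[instance] / (reputation[instance] + 10.0)  # saturates toward 1
    return rep * (1.0 - share)  # votes from a dominant instance count less each

record_positive_interaction("neighbour.example", bump=30.0)
print(vote_weight("neighbour.example", 5, 100))  # high rep, small share: ~0.71
```

An instance with no history gets weight 0, and even a high-reputation instance can't swamp a post, because its weight drops as its share of the votes grows.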
Something like that already happened on Mastodon! Admins got together and marked instances as "bad". They made a list. And after a few months, everything went back to normal. This kind of self organization is normal on the fediverse.
I would imagine it's the same with bans. I imagine there will eventually be a set of reputation-watchdog servers, which might be used instead of this whole "everyone follows the same modlog" approach. The concept of trusting everyone out of the gate seems a little naive.
Here’s an idea: adjust the weights of votes by how predictable they are.
If account A always upvotes account B, those upvotes don’t count as much—not just because A is potentially a bot, but because A’s upvotes don’t tell us anything new.
If account C upvotes a post by account B, but there was no a priori reason to expect it to based on C’s past history, that upvote is more significant.
This could take into account not just the direct interactions between two accounts, but how other accounts interact with each of them, whether they’re part of larger groups that tend to vote similarly, etc.
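A toy version of the pairwise part of this (ignoring groups and indirect interactions), with arbitrary smoothing constants:

```python
# "Surprising votes count more": weight each upvote by how unexpected it
# is given the voter's past history with that author. Smoothing constants
# are arbitrary.

from collections import Counter

past_upvotes = Counter()   # (voter, author) -> times voter upvoted author
past_chances = Counter()   # (voter, author) -> times voter saw author's posts

def vote_weight(voter: str, author: str) -> float:
    """Near 1.0 for an unexpected upvote, near 0 for a fully predictable one."""
    pair = (voter, author)
    # Laplace-smoothed estimate of P(voter upvotes author)
    p = (past_upvotes[pair] + 1) / (past_chances[pair] + 2)
    return 1.0 - p

# Account A always upvotes account B: nearly worthless signal.
past_upvotes[("A", "B")] = 50
past_chances[("A", "B")] = 50
print(vote_weight("A", "B"))   # ~0.02
# Account C has no history with B: much more informative.
print(vote_weight("C", "B"))   # 0.5
```

A bot ring that always upvotes each other would converge toward zero weight on its own, without anyone having to classify the accounts as bots.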
Thank you for this. I'd upvote you, but you've already taken care of that.
This is something that will be hard to solve. You can't really effectively discern between a large instance with a lot of real users and an instance with a lot of fake users made to look real. Any kind of protection I can think of, for example one based on user activity, can simply be faked by the bot server.
The only solution I see is to just publish the vote% or vote counts per instance, since that's what the local server knows, and let us personally ban instances we don't recognize or care about, so their votes won't count in our feed.
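The client-side part of that is trivial once the per-instance counts are published; a minimal sketch (the instance names are made up):

```python
# Client-side filtering: the server publishes per-instance vote counts,
# and each user simply drops instances they've personally blocked.

blocked = {"spamfarm.example"}  # this user's personal instance blocklist

def personal_score(votes_by_instance: dict[str, int]) -> int:
    """Score a post using only votes from instances the user hasn't blocked."""
    return sum(v for inst, v in votes_by_instance.items() if inst not in blocked)

print(personal_score({"home.example": 12, "spamfarm.example": 4000}))  # 12
```

The bot farm's 4000 votes simply vanish from your feed, without the server having to decide anything on your behalf.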