But even if you don't believe them, it's got a 50% chance on a coin toss.
Honestly, it's hard to do that. Chrome's dominance means much of the internet has been designed for Chrome. If you don't support the same features, people will complain that your browser is broken or sucks.
Myself, I used Firefox for the longest time before I eventually just got too annoyed with the umpteenth site not working correctly and switched to Chrome.
You can't aggregate them internally, anyway. You need to be able to know if someone already voted on something.
I think ActivityPub needs to be extended so that likes and reduces only need to be sent to the host of the content, with federation then carrying just the aggregate number. Then the only servers that need to know the identity of voters are the host server (necessary to ensure nobody can multi-vote) and optionally the server the user voted from (it could just relay the vote to the host server and not store it locally, but then it'd be harder to tell what you've already upvoted -- local storage could work, but I think lots of people use social media on multiple devices).
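A minimal sketch of what I mean, with made-up names (this isn't real ActivityPub vocabulary): the host instance is the only one that stores *who* voted, purely for dedup, and everything it federates out is just a count.

```python
# Hypothetical sketch: only the host instance stores voter identities
# (needed to reject multi-voting); other instances only ever receive
# an aggregate score. All names here are invented for illustration.

class HostInstance:
    def __init__(self):
        self.voters = {}  # post_id -> set of actor URLs that have voted

    def receive_vote(self, post_id: str, actor_url: str) -> bool:
        seen = self.voters.setdefault(post_id, set())
        if actor_url in seen:
            return False  # duplicate vote, rejected
        seen.add(actor_url)
        return True

    def federated_payload(self, post_id: str) -> dict:
        # What gets broadcast to peer instances: just a number,
        # no voter identities attached.
        return {"id": post_id, "score": len(self.voters.get(post_id, set()))}


host = HostInstance()
host.receive_vote("post/1", "https://a.example/u/alice")
host.receive_vote("post/1", "https://a.example/u/alice")  # rejected
host.receive_vote("post/1", "https://b.example/u/bob")
print(host.federated_payload("post/1"))  # {'id': 'post/1', 'score': 2}
```

The user's own server could hold an optional local "I voted on this" cache for UI purposes without it ever being federated.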
Sometimes reporting technically covers the last one. But usually not. Not all subs have rules against bigotry, trolling, dog whistles, general assholery, etc. I strongly hold it's important that downvoting is an option to deal with these kinda things. It's a way to show everyone that the comment isn't acceptable.
Plus even when reporting is an option, it may not be fast enough. Can't really automate removals, either, as people will abuse that.
Arguably "disagree but acceptable" should just not upvote. In a certain sense, that's already a middle option.
That would put load on PeerTube that it otherwise wouldn't have, though. I'm not sure how well it will be able to scale. Hosting video is really expensive. If something is already on YouTube, it may be best to just leave it there so as to not put all our weight on a new, untested product.
Yeah, it's the fallacy of letting perfect be the enemy of good. It can stop 90% of bots (made up that number) but because it doesn't stop 100%, it's not worth doing?
What's also weird is that the Lemmy dev was pushing for a different form of captcha that definitely doesn't stop bots either (it's just more niche and requires bots to expend a frankly trivial amount of extra processing power)!
Same. I loved Chuck Norris jokes when I was younger, but it just became harder to enjoy them when I learned what Chuck Norris was actually like.
The style of joke works better for people who are actually Chads. Like Keanu Reeves.
Who the hell wants whatever the alternative to Stack Overflow is?? I mean, what would that even be? Misleading Quora questions? Expertsexchange pages that give wrong answers and don't let you view them without an account? Microsoft help forums where nobody even answers the question and the thread is just people complaining about the lack of answers? Old school forums where denver_coder12 just replies to his own question with "I fixed it"?
The pre stack overflow internet sucked ass.
"has anyone from my server interacted or searched for the post by its URL" is misleading. I struggled with this yesterday. Turns out you have to search in a very specific way.
In both kbin and Lemmy, you can't just go to the community's URL (which is utterly bizarre). You must search the full magazine name. In Lemmy, you weirdly need the ! in front when searching to find it. In kbin, you don't need that, but you do need to search the magazine in the "neutral" search mode, not magazine search mode (lol wut?). Actually, in Lemmy you also have to use the "normal" search field and not the community search field.
And of course, both have a discovery issue. People want to be able to search a partial string like "hobby" without having to know what instance their community might be on or if the full name might be things like "hobby_discuss", etc. They should not need a separate tool to do this search. That's just a barrier to entry.
Anyway the whole thing is a usability barrier that needs to change. It also makes smaller instances actively harder to use, which is a bad incentive. We don't want people to experience small instances as "buggy" (even if it's working as intended).
Anyone currently trying to create a sub should have an account on every major instance and subscribe to their new sub to ensure it shows up in the search. And yes, that is just completely silly (and unscalable beyond the biggest instances).
Yeah. The unethical bots will just use scrapers, as they already do for the many websites that don't offer APIs. They're already violating ToS, so they don't care. Ethical ones won't have that option (at least not past the fairly low quota).
I know some phones have already done this, but I always liked the idea of support for using your phone as a TV remote. The phone has replaced so many pieces of hardware that it feels silly that TV remotes haven't been replaced yet.
I also specifically wish Chrome supported extensions on mobile. Firefox does it. Why can't the biggest browser do it?
I'm very skeptical that mCaptcha would actually work, beyond perhaps temporarily slowing bots down by being niche. How expensive can you make it without hurting legitimate users? And how expensive does it need to be to discourage bots? Especially when purpose-built bots can do the kind of math we're talking about in optimized software and hardware while legitimate users can't.