What would an individual or entity gain from covertly deploying chatbots here? At least on reddit, karma had some relevance to reach, so accounts that gained enough karma could be sold. No such system exists here. Plus, larger platforms offer far more possible interactions if someone wanted to test bots. So many posts here get zero comments to begin with; interaction is very limited and tends to be biased or polarized (high-interaction posts tend to be high for a reason). And when it comes down to it, Pascal's wager sort of comes into play: if you don't know you're talking to a chatbot, is anything lost by simply assuming they aren't a bot?
Question: outside of minimum-karma posting requirements, how did having more karma assist with a reddit user's reach?
But... how does high karma help with that?
The account looks more real. A ten-year reddit account with a bunch of real comments and 100k karma looks like a human being, so when it posts something like "Razer mice have really gotten amazing over the past couple of years! I have the new Naga and I couldn't live without it" on a PC gaming subreddit, there's a higher chance it reads as a real recommendation from a real human rather than as a paid ad.
I don't think it does; having a flair in a particular sub lends more weight within that sub. I believe some individuals with high karma tend to be more obnoxious because they don't care that people will downvote them, though I've personally encountered only one (which could just be that specific individual). There are others who wish for tools that would screen out both low-karma users (spam accounts, etc.) and really-high-karma (100K+) users, presumably for reasons along these lines.
Humans often behave differently when they have coveted labels attached to them. Think celebrities, blue checkmarks, royalty, etc.
That's not the inverse of Pascal's Wager. "If p then q" has the contrapositive "if not q then not p," which is logically equivalent. Plus you need to take into account the premises of the argument. There's definitely a premise that if there is a god, there is only one god; it doesn't hold up otherwise. So the contrapositive of "if there is a god, then living this way gets me a good afterlife" is "if I don't get a good afterlife, there is no god," which is still just fine. So there's no real logical fallacy. The only subjective component is the cost of living that way. If it costs you nothing, then the argument says you should definitely act as if there is a god. If it costs a lot, it becomes less obvious. The Wager is based on the idea that you don't lose much by acting in accordance with the required lifestyle. It does ignore the point that if there is a god, said god would likely have access to your thoughts, making it all moot.
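To pin down the terminology, here is a minimal sketch of the four standard forms of a conditional (standard textbook definitions; the notation is mine, not the original commenter's):

```latex
% Standard forms of a conditional p -> q, with
%   p = "there is a god", q = "living this way gets me a good afterlife"
\[
\begin{aligned}
\text{original:}       &\quad p \to q \\
\text{converse:}       &\quad q \to p \\
\text{inverse:}        &\quad \lnot p \to \lnot q \\
\text{contrapositive:} &\quad \lnot q \to \lnot p \quad (\text{equivalent to } p \to q)
\end{aligned}
\]
```

Only the contrapositive is equivalent to the original conditional, which is why "if I don't get a good afterlife, there is no god" preserves the original claim while the converse and inverse do not.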
That being said, I'm still an atheist. But my point is that if I don't know it's a robot, I get the same result. Malicious actors can deploy bots, but there are also just as many malicious actors acting as trolls. So worrying about future unhappiness isn't worth it, in my opinion.
Again, it's not belief in something else; it's not believing in God. Belief in a "not-God" or "anti-God" is logically a different concept entirely. It's simply belief versus non-belief. The major flaw is that it only works if there's only one God, and it's the God that aligns with whatever belief system you claim said God wants you to follow. If you use the premise "if there is a god, it's the Christian god," and the premise "it costs very little to live a life according to God," then the two losses are "I acted as if there was a god, lost a little bit of leisure, but got no payoff" versus "I acted as if there was no god and now I'm doomed to eternal damnation." The problem isn't the logic. It's the premises that are fallacious.
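As a sketch of that comparison (my illustration, assuming a finite cost c for the lifestyle and treating the reward and punishment as infinite, per the usual formulation of the wager):

```latex
% Illustrative payoff matrix for the wager under the two premises above
% (c = finite cost of living the lifestyle; reward/punishment taken as infinite)
\[
\begin{array}{l|cc}
                           & \text{God exists} & \text{no god} \\ \hline
\text{live as if god}      & +\infty           & -c            \\
\text{live as if no god}   & -\infty           & 0             \\
\end{array}
\]
```

For any nonzero probability of "God exists," the expected value of the top row dominates; that is the whole force of the wager, and also why everything hinges on those two premises.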
Except that isn't a converse; it's relying on the false premise of another god. The negation of "a god exists" is "no god exists." You're just making up a new proof that isn't the converse, inverse, or contrapositive. You're literally just describing what happens if there's a different god.
Pascal's wager suffers from a faulty premise, not logical inconsistency. You're just doing a whole bunch of nonsense and extra work to say the same thing.
Yes, but your "not god" is simply a different deity, so it's a different proof. We're back to the faulty premise.
"God X" and "God Y" are equally valid assertions which violates the premise. I don't care that you call it "anti-God" since you're making it equivalent to a god and able to offer eternal rewards. Your entire logical argument is absurd. Pascal's wager is famously known for suffering from false premise of finite loss and infinite reward. All of the absurdity of the wager comes from the premises which you continually ignore.
A faulty premise isn't a logical fallacy, though. That's my whole problem here. A false premise doesn't mean the logic is invalid; this is an important distinction in formal logic. The argument is fine; the foundation is not, which makes it unsound rather than fallacious. You're now just agreeing with what I originally said.
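In standard formal-logic terms, the distinction being drawn here is validity versus soundness (textbook definitions, not from the thread):

```latex
% valid: the conclusion follows from the premises (the form is correct)
% sound: valid AND all premises are actually true
\[
\text{sound}(A) \iff \text{valid}(A) \;\wedge\; \text{all premises of } A \text{ are true}
\]
```

On this reading, the wager is valid but unsound: the form holds, the premises don't.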
Probably because it is a clear-cut example of a logical fallacy. The whole thing was an exercise in question-begging via its unstated assumptions.
The wager isn't a fallacy; it suffers from a false premise. The logical validity isn't the problem: it's internally consistent, and it doesn't beg the question anywhere in the argument itself. You could, I guess, sort of claim the premise does? But not really.