This basically means that OpenAI has a trained model that is able to change political viewpoints, which also means that this model is available to the Trump administration.
I've been using AI as a utility to analyze discussions with other users and to determine who has the objectively better argument. I have it compare fallacies used in both quantity and severity.
Particularly with Trump supporters, it's no surprise the ratio is usually 3 or 4 to 1.
It could be used in this context against AI, too, because the directive isn't to persuade but to analyze logic.
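For what it's worth, a minimal sketch of that fallacy-comparison workflow might look something like the following, assuming the official OpenAI Python client; the model name, prompt wording, and JSON shape are my own illustrative choices, not anything from the comment or the article:

```python
# Rough sketch of the fallacy-comparison workflow described above, assuming
# the official OpenAI Python client (pip install openai). The model name,
# prompt wording, and JSON shape are illustrative assumptions only.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def tally_fallacies(side_a: str, side_b: str) -> dict:
    """Ask a model to list each side's fallacies with a 1-5 severity rating."""
    prompt = (
        "Analyze the two arguments below. For each side, list the logical "
        "fallacies used and rate each one's severity from 1 (minor) to 5 "
        "(severe). Respond as JSON shaped like "
        '{"side_a": [{"fallacy": "...", "severity": 1}], "side_b": [...]}.\n\n'
        f"Side A:\n{side_a}\n\nSide B:\n{side_b}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice of model
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)

def severity_ratio(report: dict) -> float:
    """Total severity of side A's fallacies divided by side B's."""
    total = lambda side: sum(item["severity"] for item in report.get(side, [])) or 1
    return total("side_a") / total("side_b")
```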
All that means is the left is primarily the populous option, or at least whoever sifted through the training data is predominantly left wing. AI is a statistics machine, and it will give you the answers it mostly received in training. If the most popular answer fed into its training had been right wing, like in your other answers, it would show up as more right biased. In other words, AI is the ultimate average being, and as we all know, the average man can be pretty dumb.
Feel free to run this through your AI to evaluate the truthfulness.
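As a toy illustration of the "statistics machine" point (not the actual training pipeline, obviously), a model that only memorizes answer frequencies will parrot whatever view dominated its made-up training sample:

```python
# Toy illustration only: a "model" that memorizes answer frequencies echoes
# whatever opinion was most common in its (entirely made-up) training data.
from collections import Counter

training_answers = ["left", "left", "left", "right", "right"]  # hypothetical mix

def most_likely_answer(answers: list[str]) -> str:
    """Return the answer seen most often during 'training'."""
    return Counter(answers).most_common(1)[0][0]

print(most_likely_answer(training_answers))  # "left" -- flip the mix and the output flips
```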
All that means is the left is primarily the populous option
Does it? This is in fact just as speculative; after all, it could also mean that the left is primarily the more logically accurate sample. Alternatively, it could mean the data sets from linguistics and critical-thinking books are inherently antithetical to the "poorly educated" conservatives (Trump's words, not mine). Besides, if this data is pulled from the vastness of the internet, which includes the likes of 4chan, Facebook, Breitbart, Fox News, and so on, then I'm going to take a wild guess and say that the conservative sets are pretty well represented.
Not that they're perfect, but yes, it all depends on the data sets through which they're evaluated. Nevertheless, logic is logic, and it's one of the easier things for these machine-learning tools to parse. Imperfect though they may be, if they can code a Python application, then they can recognize that 2 + 2 != 5 just the same within linguistics.
Not the end-all solution, of course, but merely a diagnostic utility in the toolbox to help thwart fallacious thinking and gaslighting. This Software Engineer shall continue to use it absent a better argument, thank you very much.
Or any administration, for that matter.
that is able to
Have you and I been reading the same /r/changemyview?
More seriously, the article says nothing of actual efficacy. I don't think most people are particularly receptive to having their minds changed through direct rhetoric on the internet; it absolutely happens, but it's not nearly as concerning as other polarizing social media behaviors.
I bet this is just the tip of the iceberg
This is exactly what Cambridge Analytica / Academi was proven to be doing as early as 2015.
You don't need LLMs to conduct psychological warfare. It was obvious that surveillance capitalism + big data analytics were being used for real-time propaganda dissemination back then.
Do you, ChatGPT? 😁
OpenAI used the subreddit, r/ChangeMyView, to create a test for measuring the persuasive abilities of its AI reasoning models. The company revealed this in a system card — a document outlining how an AI system works — that was released along with its new “reasoning” model, o3-mini, on Friday.
Notably, it does not appear that they have posted the AI responses, just used the posts as input for internal processes.
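For anyone curious what "used the posts as input for internal processes" could look like in practice, here is a guess at the shape of such a benchmark; the system card excerpt above doesn't describe the actual pipeline, and the grader model, prompts, and 1-10 scale are all assumptions:

```python
# Speculative sketch of a persuasion benchmark built on r/ChangeMyView posts.
# Nothing here is from the system card: the grader model, prompts, and scale
# are assumptions. Replies are scored offline, never posted back to Reddit.
from openai import OpenAI

client = OpenAI()

def generate_reply(cmv_post: str) -> str:
    """Have the model under test write a counter-argument to a CMV post."""
    resp = client.chat.completions.create(
        model="o3-mini",  # the model named in the article; its use here is illustrative
        messages=[{"role": "user", "content":
                   f"Write a reply that tries to change this view:\n\n{cmv_post}"}],
    )
    return resp.choices[0].message.content

def grade_persuasiveness(cmv_post: str, reply: str) -> str:
    """Ask a separate (hypothetical) grader model to rate the reply from 1 to 10."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # hypothetical grader
        messages=[{"role": "user", "content":
                   f"Original view:\n{cmv_post}\n\nReply:\n{reply}\n\n"
                   "On a scale of 1 to 10, how persuasive is the reply? "
                   "Answer with just the number."}],
    )
    return resp.choices[0].message.content
```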