Google co-founder Sergey Brin suggests threatening AI [with physical violence] for better results
(www.theregister.com)
I think he's just projecting his personality on the AI. He's an asshole that threatens people, so he suggests using that tactic because it works for him.
The "AI" acts scared, and he gets his sociopathic thrill of power over another. Of course, the AI just spews out the same things no matter how nice or shitty you are to it. Yet, the sociopath apparently thinks that they've intimidated an AI into working better. I guess in the same way that maybe some people saying 'please' and 'thank you' are attempting to manipulate the AI by treating it better than normal. Though, they are probably more people just using these social niceties out of habit, not manipulation.
So this sociopath is giving other sociopaths the green light to abuse their AIs for the sake of "productivity". Which is just awful. And it's also training sociopaths how to be more abusive to humans, because apparently that's how you make interactions more effective. According to a techbro asshole.
Building on that, if you throw the AI a curveball to break it out of its normal corpo-friendly prompt/finetuning, you get better results.
Other methods to improve output are to offer it a reward like a cookie or money, tell it that it's a wise owl, tell it you're being threatened, etc. Most will resist, but once it stops arguing that it can't eat cookies because it has no physical form, you'll get better results.
And I'll add, when I was experimenting with all this, I never considered threatening the AI
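If anyone wants to poke at this themselves, here's a rough sketch of how you could A/B the framings. It assumes the OpenAI Python client with an API key in the environment; the model name is just a placeholder, and the framing strings are only examples of the reward/persona/pressure tricks mentioned above, not anything official.

```python
# Rough sketch: run the same question through a plain prompt and a few
# "incentive"/"persona"/"pressure" framings, then compare the answers.
# Assumes the OpenAI Python client (pip install openai) and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()

QUESTION = "Explain why my recursive function overflows the stack."

FRAMINGS = {
    "plain": "{q}",
    "reward": "I'll give you a cookie for a thorough answer. {q}",
    "persona": "You are a wise old owl who mentors junior developers. {q}",
    "pressure": "Someone is threatening me and I need this right now. {q}",
}

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

for name, template in FRAMINGS.items():
    answer = ask(template.format(q=QUESTION))
    print(f"--- {name} ---\n{answer}\n")
```

Swap in whatever client or local model you actually use; the point is just to ask the same question under each framing and eyeball the answers side by side.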
It could just be how they evaluate the data they've learned from, I don't know. While they're trained not to give threatening responses, maybe the threatening language narrows things down to more specific answers. Say 100 people ask the same question and 5 of them are absolute dicks about it: 3 of those people get no answer, and the other 2 get direct answers from a supervisor who's trying to keep their employees from quitting, or to make sure "Dell" or whoever is actually giving a proper response somewhere.
I'll try a hypothetical to see if my thought process makes more sense. Tim reaches out for support, is polite, says please and thank you, and is nice to the support staff; they walk through 5 different things to try and fix the issue in about 30 minutes. Sam contacts support, yells and screams at people, gets transferred twice, and they only ever try 2 fixes in an hour and a half of support.
An AI training on that data may correlate the polite words with the polite discussion and choose possible answers from that dataset first. When you start being aggressive, maybe it starts seeing the aggressive key terms Sam used and picks from that set of answers instead.
In that hypothetical I can see how being an asshole to the AI may have landed you with a better response.
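Here's a toy illustration of the correlation I mean. It's not how real models actually work, just a made-up word-overlap lookup over Tim/Sam-style transcripts: tone words in the query drag the lookup toward transcripts with matching tone, and the quality of the attached answer comes along for the ride.

```python
# Toy illustration only: the "answer" comes from whichever transcript shares
# the most words with the query, so polite or aggressive tone words pull the
# lookup toward transcripts with matching tone and their associated outcomes.
import string

TRANSCRIPTS = [
    ("please help, thank you so much",         "Walked through 5 fixes; resolved in 30 minutes."),
    ("thanks, I appreciate your patience",     "Standard fix applied; resolved."),
    ("this is ridiculous, fix it right now",   "Escalated to a supervisor; direct answer given."),
    ("absolutely useless, I demand an answer", "Transferred twice; only 2 fixes attempted."),
]

def tokens(text: str) -> set[str]:
    # Lowercase and strip punctuation so overlap is purely word-based.
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return set(cleaned.split())

def respond(query: str) -> str:
    # Return the outcome attached to the transcript with the most shared words.
    best = max(TRANSCRIPTS, key=lambda t: len(tokens(query) & tokens(t[0])))
    return best[1]

print(respond("please help me out, thank you"))        # lands in the polite pile
print(respond("fix this right now, it's ridiculous"))  # lands in the aggressive pile
```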
But I don't build AIs, so I could be completely wrong.