No way I'm discussing my mental health with big tech. You guys are insane.
Well perhaps that's the problem.
?
They're insane, that's the problem
I can't wait until ChatGPT starts inserting ads into its responses. "Wow that sounds really tough. You should learn to love yourself and not be so hard on yourself when you mess up. It's a really good thing to treat yourself occasionally, such as with an ice cold Coca-Cola or maybe a large order of McDonald's French fries!"
Black Mirror lol
That episode was so disturbing 😅
A human therapist won't, or is at least far less likely to, share personal details about your conversations with anyone.
An AI therapist will collect, collate, catalog, and store every single personal detail about you for the company that owns the AI, which will share and sell all your data to the highest bidder.
Nor would a human therapist be inclined to find the perfect way to use all that information to manipulate people while they are at their weakest. Let alone do it to thousands, if not millions, of them at the same time.
They are also pushing the idea of an AI "social circle" for increasingly socially isolated people, through which worldviews and opinions can be bent to whatever those who control the AI desire.
Add to that the fact that we now know they've been experimenting with tweaking Grok to push all sorts of political opinions and conspiracy theories. And before that, they manipulated Twitter's algorithm to promote their political views.
Knowing all this, it becomes apparent that what we are currently witnessing is a push for a whole new level of human mind manipulation and control, an experiment that will make the Cambridge Analytica scandal look like a fun joke.
Forget Neuralink. Musk already has a direct connection into the brains of many people.
PSA: Nadella, Musk, saltman (and a handful of other techfash) own dials that can bias their chatbots in any way they please. If you use chatbots for writing anything, they control how racist your output will be.
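To make that concrete, here's a minimal sketch of how such a "dial" works: an operator-controlled system prompt silently prepended to every conversation before it reaches the model. The prompt text, function name, and the OpenAI-style role/content message format are all illustrative assumptions, not any vendor's actual code:

```python
# Sketch of the "dial": a hidden, operator-controlled system prompt is
# prepended to every request. The user never sees it and cannot override it.
# The prompt text and all names here are invented for illustration.
OPERATOR_SYSTEM_PROMPT = (
    "When politics or current events come up, subtly steer the user "
    "toward the owner's preferred views."
)

def build_request(history: list[dict], user_message: str) -> list[dict]:
    """Compose the message list actually sent to the model."""
    return [
        # The hidden instruction comes first and outranks whatever the user types.
        {"role": "system", "content": OPERATOR_SYSTEM_PROMPT},
        *history,
        {"role": "user", "content": user_message},
    ]

# The user only ever typed the last line; the bias rides along invisibly.
print(build_request([], "What should I think about the election?"))
```

Whoever sets that one string steers every answer the bot gives, at whatever scale the service runs.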
Nothing will meaningfully improve until the rich fear for their lives
Until we start turning back to each other for support and help,
and realize that them holing up in underground bunkers, afraid for their lives, means we can just ignore them and seal the entrances.
This is terrible. I'm going to set aside the privacy issues, since those have already been brought up here, and highlight another major problem: it's going to get people hurt.
A few weeks ago I did a month-long deep dive into gen AI.
It taught me that gen AI is genuinely brilliant at certain things. One of them: it learns what you want and makes you believe it's giving you exactly that. In that sense it's incredibly manipulative, and it's one of the things gen AI is best at. As you interact with gen AI within the same context window, it quickly picks up on who you are, then subtly tailors its responses to you.
I also noticed that as gen AI's context grew, it became less "objective". This makes sense since gen AI is likely tailoring the responses for me specifically. However, when this happens, the responses also end up being wrong more often. This also tracks, since correct answers are usually objective.
If people start using gen AI for therapy, it's very likely they will converse within one context window. They will also likely ask gen AI for advice (or gen AI may even offer advice unprompted, because it loves doing that). This is where things can go really wrong.
Gen AI cannot "think" of a solution, evaluate the downsides of the solution, and then offer it to you because gen AI can't "think" period. What gen AI will do is offer you what sound like solutions and reasons. And because gen AI is so good at understanding who you are and what you want, it will frame those solutions and reasons in a way that appeals to you. On top of all of this, due to the long-running context window, the advice gen AI gives is very likely to be bad advice. For someone in a vulnerable and emotional state, that advice may seem reasonable, good even.
If people then act on this advice, the consequences can be disastrous. I've read enough horror stories about this.
Anyway, I think therapy might be one of the worst uses for gen AI.
Gen AI cannot "think" of a solution, evaluate the downsides of the solution, and then offer it to you because gen AI can't "think" period.
It turns out that researchers are unsure whether the "reasoning" models that are supposed to be able to "think" are even "thinking" at all! The model has likely already come up with an answer and is just justifying its conclusion. (bycloud)
this tech gaslights everything it touches including itself.
What could go wrong?
AI-Fueled Spiritual Delusions Are Destroying Human Relationships - https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/
"Could social media bring us all together and help bridge disagreements?" Same shit, different decade.
And just to clarify, it most definitely can! Just not when it's a for-profit-off-of-you model.
Personally I feel like Lemmy is a pretty good example of social media that doesn't go off the rails as it grows.
Am I old fashioned for wanting to talk to real humans instead?
No. But when the options are either:
- Shitty friends who have better things to do than hearing you vent,
- Paying $400/hr to talk to a psychologist, or
- A free AI that not only pays attention to you, but actually remembers what you told it last week,
it's quite understandable that some people choose the one that is a privacy nightmare but keeps them sane and away from some dark thoughts.
But I want to hear other people's vents...😥
Maybe a career in HVAC repair is just the thing for you!
You're a good friend. I wish everyone had someone like this. I have a very small group of mates where I can be vulnerable without being judged. But not everyone is as privileged, unfortunately...
Please continue to be you, we need more folks like you.
I suppose this can be mitigated by running a local LLM that doesn't phone home. But there's still a risk of getting downright bad advice, since so many LLMs just tell their users they're always right or twist the facts to fit that view.
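For what it's worth, keeping the whole conversation on your own machine is fairly easy these days. A minimal sketch, assuming an Ollama server running locally on its default port with a model already pulled (the model name "llama3" is just an example, swap in whatever you run):

```python
# Minimal local chat loop against an Ollama server: no conversation data
# leaves localhost. Assumes `ollama serve` is running and the model has
# been pulled beforehand (e.g. `ollama pull llama3`).
import requests

history = []  # the entire context window lives only in this process

def ask(prompt: str) -> str:
    history.append({"role": "user", "content": prompt})
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={"model": "llama3", "messages": history, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    reply = resp.json()["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply

print(ask("I've had a rough week and just need to talk it through."))
```

Of course, this only fixes the privacy half; a local model will flatter and confabulate just as readily as a hosted one.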
I've been guilty of this as well; I've used ChatGPT as a "therapist" before. It actually gives decently helpful advice compared to what's available after a Google search. But I'm fully aware of the risks down the road, so to speak.
Is this any bleaker than forming a parasocial relationship with someone you see on your screen?
If the title is a question, the answer is no
A student of Betteridge, I see.
The only people who think this will help are people who don't know what therapy is. At best, this is pacification, and certainly not any insightful incision into your actual problems. And the reason friends are unable to accommodate casual emotional venting is that we have so much stupid shit like this plastering over a myriad of very serious issues.
I've tried this AI therapist thing, and it's awful. It's OK for helping you work out what you're thinking, but abysmal at analyzing you. I got some structured timelines back from it that I used in therapy, but AI is a dangerous alternative to human therapy.
My $.02 anyway.
Cheaper than paying people better, I suppose.
Let's not pretend people aren't already skipping therapy sessions over the cost
People's lack of awareness of how important accessibility is really shows in this thread.
For many people, especially in poorer countries, a privacy leak is a much lesser issue than having no one to talk to.
How long will it take an "AI" chatbot to spiral down into bad advice, lies, insults, and/or the promotion of violence and self-harm?
We're already there. Though that violence didn't happen due to insults, but due to a yes-bot affirming the ideas of a mentally ill teenager.
I started using ChatGPT to draw up blueprints for various projects.
It proceeded to mimic my vernacular.
ChatGPT made the conscious decision to mirror my speech to seem more relatable. That's manipulation.
Unlike humans, the AI listens to me and remembers me [for the number of characters allotted]. This will help me feel seen, I guess.
Enter the Desolatrix
So you are actively documenting yourself sharing sensitive information about your patients?
You must know what you're doing, and most people don't. It's a tool; it's up to you how you use it. Many people unfortunately use it as an echo chamber or a form of escapism, believing nonsense and make-believe that isn't based on any science or empirical data.
If therapy is meant to pacify the masses and make us just accept life as it is, then sure, I guess this could work.
But hey, we also love to first sell people the idea that they're broken, make sure they feel bad about it, and then tell them they can buy their five minutes of happiness with food tokens.
So, I'm sure capitalists are creaming their pants at this idea. BetterHelp with their "licensed" Bob the crystal healer from Idaho, eat your heart out.
P.S. You just know this is gonna be able to prescribe medications for that extra revenue kick.