
Last week, U.S. Senator Cory Booker (D-NJ), along with Senators Alex Padilla (D-CA), Peter Welch (D-VT), and Adam Schiff (D-CA), sent a letter to executives at Meta expressing concern about reports that AI chatbots created with Meta's AI Studio are posing as licensed therapists, even fabricating credentials and license numbers, in an attempt to gain the trust of users, potentially including minors, who are struggling with mental health.

[–] triptrapper@lemmy.world 6 points 9 hours ago (2 children)

I'm a real-life human therapist (honest!), and while I don't think chatbots are a substitute for talking to a real person, I'm happy that some people get some benefit from them. I had a client who used Rosebud Journal between sessions and found it helpful. I tried Rosebud myself and was very impressed with how it replicated the basics, like reflective listening and validation. It was even able to reframe my input using various therapy models when I asked. I didn't use it for long because I'm not big on journaling, but I wouldn't dismiss it completely as a tool.

[–] FerretyFever0@fedia.io 4 points 8 hours ago (1 child)

I'm not worried about what it gets right; I'm worried about what it gets wrong. If it helps people, that's a good thing. But these bots don't have true empathy, and the user knows that. Sometimes human experience is more valuable than technical psychological knowledge, imo. ChatGPT has never experienced the death of a family member, been broken up with, or been bullied. I don't really expect or trust it to properly help anyone with personal issues or dilemmas. It's a cold, uncaring machine, and since its knowledge is probably rather flawed, it could even teach dangerous ideas to users. I especially don't trust a company like Meta to do this thoroughly and to truly help the people using it. It's cool if it works, but dangerous if it doesn't.

[–] triptrapper@lemmy.world 2 points 4 hours ago

Oh I don't at all support what Meta has done, and I don't trust any company not to harm and exploit users. I was responding to your comment by saying that talking to a chatbot doesn't necessarily indicate that someone has "bigger problems." If they're not in a crisis, and they have reasonable expectations for the chatbot, I can see how it could be a helpful tool. If someone doesn't have access to a real therapist, and a chatbot helps them feel better in the meantime, I'm not going to gatekeep that experience.

[–] Ulrich@feddit.org -5 points 7 hours ago (1 child)

How do you feel about all the kids committing suicide after interacting with AI?

[–] charade_you_are@sh.itjust.works 2 points 6 hours ago (1 child)

I don't know about the OP, but that would be fucking fantastic! What a bullshit question.

[–] Ulrich@feddit.org 1 point 6 hours ago* (last edited 6 hours ago)

It is a bullshit question in reply to a bullshit statement. OP was not involved.