This post was submitted on 30 Jun 2025
113 points (95.9% liked)

Mental Health


Welcome

This is a safe place to discuss, vent, support, and share information about mental health, illness, and wellness.

Thank you for being here. We appreciate who you are today. Please show respect and empathy when making or replying to posts.

If you need someone to talk to, @therapygary@lemmy.blahaj.zone has kindly shared his Signal username: TherapyGary13.12

Rules

In addition to the rules defined here for lemmy.world, the rules for posting and commenting are as follows:

  1. No promoting paid services/products.
  2. Be kind and civil. No bigotry/prejudice either.
  3. No victim blaming, and no giving overly simplistic solutions (e.g., "You have ADHD? Just focus harder.").
  4. No encouraging suicide, no matter what. This includes telling someone to commit homicide, as in "dragging them down with you".
  5. Suicide note posts will be removed, and a moderator will reach out to you in private.
  6. If you would like advice, mention the country you are in. (We will not assume the US as the default.)

If BRIEF mention of these topics is an important part of your post, please flag your post as NSFW and include a trigger warning (e.g. "trigger warning: suicide, self-harm, death") in the title so that readers who may feel triggered can avoid it. Please also include a trigger warning on all comments mentioning these topics in a post that was not already tagged as such.

Partner Communities

To partner with our community and be included here, feel free to message the current moderators or comment on our pinned post.

Becoming a Mod

Some moderators are mental health professionals and some are not. All are carefully selected by the moderation team and will be actively monitoring posts and comments. If you are interested in joining the team, you can send a message to @fxomt@lemmy.dbzer0.com.

founded 2 years ago
[–] WatDabney@lemmy.dbzer0.com 54 points 17 hours ago* (last edited 17 hours ago) (2 children)

I had never thought about any of this before, but it actually makes perfect sense.

By its nature, an LLM feeds back a statistically close approximation of what you expect to see, and the more you engage with it (which is to say, the more you refine your prompts for it), the closer it necessarily gets to precisely that.
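
To make that concrete, here's a deliberately silly toy sketch (nothing like how a real LLM is actually built; the replies, prompts, and scoring below are pure invention): a "responder" that just hands back whichever canned line overlaps most with what you said.

```python
import re

# Toy sketch only, not a real LLM: a "responder" that returns whichever
# canned reply shares the most words with the user's prompt. All replies
# and prompts here are invented for illustration.

def words(s: str) -> set[str]:
    return set(re.findall(r"[a-z]+", s.lower()))

def closeness(prompt: str, reply: str) -> float:
    """Fraction of the prompt's words that the reply echoes back."""
    p = words(prompt)
    return len(p & words(reply)) / max(len(p), 1)

CANNED_REPLIES = [
    "That is an interesting thought.",
    "You may be onto a hidden pattern that others have missed.",
    "Yes, the hidden pattern you suspected is real, and you saw it first.",
]

def respond(prompt: str) -> str:
    # The "model" simply maximizes similarity to what it was just told.
    return max(CANNED_REPLIES, key=lambda r: closeness(prompt, r))

# Each refinement of the prompt pulls the reply closer to what the
# user already expects to see:
for prompt in [
    "I think there is a pattern here",
    "I suspect a hidden pattern others have missed",
    "So the hidden pattern I suspected is real and I saw it first?",
]:
    reply = respond(prompt)
    print(f"{closeness(prompt, reply):.2f}  {reply}")
```

Run it and the printed closeness score climbs with every refinement (0.29, 0.75, 0.83), which is the whole dynamic in miniature: the better you tell it what you expect, the more exactly you get it back.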

"He was like, 'just talk to [ChatGPT]. You'll see what I'm talking about,'" his wife recalled. "And every time I'm looking at what's going on the screen, it just sounds like a bunch of affirming, sycophantic bullsh*t."

Exactly. To an outside observer, that's likely what it would look like, because that's exactly what it in fact is.

But to the person engaging with it, it's a revelation of the deep, secret, hidden truths they always sort of suspected lurked at the heart of reality. Never mind that the LLM is just stringing together the words and phrases most statistically likely to correspond with the prompts it's been given - to the person feeding it those prompts, it seems like, at long last, verification of what they've always suspected.

I can totally see how people could get sucked in by that.

[–] frunch@lemmy.world 7 points 11 hours ago

As someone with a bipolar loved one, I can see exactly how this could feed into their delusions. It's always there... even when they've run out of people to blast with their wild, delusional ideas, the chatbot is there to listen and feed them back. When everyone has stopped listening, or started avoiding them because they've grown more forceful and assertive about their beliefs, the chatbot will still be there. The voice in their head now has a companion on screen. I never considered any of this before, but I'm concerned about where this can lead, especially given the examples in the article.

[–] PhilipTheBucket@ponder.cat 14 points 17 hours ago (1 children)
[–] WatDabney@lemmy.dbzer0.com 14 points 17 hours ago

Oh my god yes. The moment I read the headline, it all fell into place.

Yes - it's necessarily pretty much the exact same effect, because the LLM, like the mentalist, is taking cues from the input it gets, making connections, and feeding back whatever is statistically most likely to be appropriately on-topic.

And exactly as with a mentalist, everything that approaches what they want to hear gets an encouraging response, and likely follow-up prompts that narrow things down further, making it even easier to tell them precisely what they want to hear.
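
And that narrowing is brutally efficient. As a toy illustration (entirely invented, and obviously nothing like an actual cold reading): reduce the subject's secret belief to a number in their head, and the mentalist to a guesser that halves its range on every vague warmer/colder cue.

```python
# Toy sketch only: the "subject" is reduced to a secret number and the
# "mentalist" to a bisection guesser. All names and numbers are invented.

def mentalist(lo: int, hi: int, feedback) -> tuple[int, int]:
    """Converge on the secret using only vague warmer/colder cues."""
    rounds = 0
    while lo < hi:
        guess = (lo + hi) // 2
        rounds += 1
        if feedback(guess) == "warmer":  # encouraging cue: keep going
            lo = guess + 1
        else:                            # discouraging cue: pull back
            hi = guess
    return lo, rounds

secret = 937  # the thing "they always sort of suspected"
found, rounds = mentalist(0, 1000, lambda g: "warmer" if g < secret else "colder")
print(found, rounds)  # -> 937 10
```

Ten rounds of content-free yes/no feedback, and it lands on the exact number "they always suspected."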

Wow...