this post was submitted on 27 Apr 2026
1158 points (98.6% liked)
Technology
84200 readers
4031 users here now
This is a most excellent place for technology news and articles.
Our Rules
- Follow the lemmy.world rules.
- Only tech related news or articles.
- Be excellent to each other!
- Mod approved content bots can post up to 10 articles per day.
- Threads asking for personal tech support may be deleted.
- Politics threads may be removed.
- No memes allowed as posts, OK to post as comments.
- Only approved bots from the list below; this includes using AI responses and summaries. To ask if your bot can be added, please contact a mod.
- Check for duplicates before posting; duplicates may be removed.
- Accounts 7 days and younger will have their posts automatically removed.
founded 2 years ago
This is absolutely hilarious. "AI" users getting what they deserve. Chef's kiss.
This is what happens when a new technology arrives and the companies are run by commerce grads, not scientists or engineers who understand the technology.
AI has good therapeutic uses, particularly for disabled or impoverished people who may not be able to access mainstream therapy
Please don't recommend AI for therapeutic uses; it's been optimised only to keep the user engaged, and it has pushed many people into psychosis. Just search for "ai psychosis" on your favourite search engine and you'll get a ton of reports on how LLMs validate vulnerable people's delusions, sometimes pushing them all the way to murder and/or suicide.
This is a post about Claude. It's better than chatgpt and the sad thing is, it's the best option a lot of people have.
This is a post about heroin. It's better than oxy, and the sad thing is, it's the best option a lot of people have.
I actually don't know much about drugs, but you get the point: you should not be trying to "self-medicate" for psychological pain with products from unregulated "street" vendors.
And I'd like independent studies to prove it's better than nothing before I'd recommend it to replace nothing. Especially when self guided mental health solutions such as meditation exist.
I don't see how nothing would be better than someone using a good-quality AI to, for example, ground them during a panic attack.
AI will not ground you; it will reinforce what you already believe. That's why it's very dangerous for "therapeutic" use.
IME it depends which one
Just FYI, there are experts in this thread telling you it doesn't depend which one.
Yes, some are worse than others. Yes, some have some trivial safeguards added for the worst known risks.
But no, none of them are remotely safe for use with self guided therapy.
As others have mentioned, anyone doing so would be much better off pirating or shoplifting the appropriate books, directly.
Responsible people using AI for expert knowledge always risk harm from the way the AI jumps immediately to the answer it thinks they want, ignoring all other available answers. :(
Because nothing doesn't run the risk of encouraging catastrophizing, acting on your heightened emotions, or coming to irrational conclusions. If it's consistently able to not do those things for a variety of people that's great. But as someone who had to learn to control her panic attacks, I absolutely can see advice and recommendations that are worse than nothing.
And yeah given llms' reputation for dealing with psychosis, delusions, and suicidality, I don't trust any of the technology compared to nothing, despite knowing how difficult nothing is for panic attacks.
IME it depends which one
That's fair, but given the way the technology actually works, I stand by my position that there is a very real potential for harm and safer alternatives that are similarly accessible. If studies show it's safe and helpful that's cool, but at this moment I'd strongly discourage any loved one who's interested in using an llm for this purpose and would instead point them towards other resources.
Yes I see your point, and agree that actual therapy is dramatically safer and better
Yeah, and I guess my point is that so is pirating a CBT workbook and trying it yourself.
I was about to reply that you forgot your /s, but then I refreshed my browser tab.
Like... there are multiple documented cases of sycophantic llms confirming people's delusions. 'ai psychosis' is just a short way of saying the AI is a non-funny-improv-comedian and will always "yes and" your prompt.
prompt: "I feel bad and think I need to kill myself"
response: "You're totally right, here's some help in how to do that..."
prompt: "I have this great idea: If we eat broken glass, we'll be healthier"
response: "Absolutely. Glass is made out of silicon dioxide, which has some health benefits if consumed in small amounts."
prompt: "You told me to see a doctor, but I don't want to"
response: "I'm sorry, you're right. You don't need to see a doctor. Your chest pain is perfectly normal."
My examples are more physical things instead of mental because the consequence is more clear, but the same issue exists for mental health.
Using an AI for therapy or medical advice is a stupid, dumb, very bad idea. It will at best magnify problems.
Suggesting that disabled or impoverished people use it because they can't access actual mental healthcare seems equivalent to eugenics to me.
That I will agree with. Maybe we should spend a small fraction of the money going into data centers on providing healthcare instead.
It depends which one you use and how you use it. They're not all chatgpt quality.
I hope you are not seriously advocating using the lying machine for therapy. You would get more value talking to a finger puppet.
It depends which one you use and how it's used. Plus it's a developing field. Bear in mind my comment was in response to someone saying AI users were "getting what they deserve".
No. Chatbots are machines built by billionaires with the agenda of making money. They literally design these bots (even the therapeutic ones) to be sycophantic to the point that they tell people anything to keep them chatting longer. To the point that some of their users lose touch with reality. How many cases do we need of chatbots helping a teenager plan and succeed at suicide? Altruists did not design these machines. Even with a human therapist we have to watch for the landmines of their personal agendas. That's a thousand times worse for machines that have no humanity, are capable of LIES, and have secret unwritten priorities written into their code by rich sociopathic creators. If Facebook taught us anything, it should be that when something on the internet is free, it's not because we are the customers.
Also DO NOT TELL ALL YOUR DEEPEST DARKEST SECRETS TO CHATBOTS! They aren't required by any legal bodies to protect that information! OMFG
People who need therapy are one of the groups that should be kept as far away from AI as possible.
AIs are yes-men; they agree with most of what you say. Do you really think it's a good idea to reinforce the bad worldview or sense of self that someone who desperately needs therapy most likely has?
It depends which one people use and how it's used. Please bear in mind my comment was in response to someone saying AI users were getting "what they deserve". Do you think that comment should apply to disabled people who can't access any other form of therapy?
It really doesn't. Pretty much all models so far lose their guardrails once you are deep enough into the conversation. There were multiple news articles about AI giving someone the go-ahead to off themselves.
No matter which way you use it, it's bad. If you ask it for tips, you are essentially asking the average redditor for mental health advice. If you use it for conversations, you are forming a parasocial relationship with an AI that will constantly get wrong the things you told it before, while reinforcing whatever worldview you have. The only thing that would slightly help is supervision by a human, but that would make the whole exercise redundant.
If they were desperate enough to be forced into using AI, then that above comment wouldn't apply to them, but instead to the ones that are responsible for the broken system in the first place.
I see it differently but thanks for chatting with me
JFC...there are already disclaimers on this. "For Entertainment Purposes Only".
Same excuse Fox News used.
Impoverished people need stable income and subsidized rations to reduce their burden, not LLM subscriptions.
You can't use therapy to escape hunger.