I made an LLM therapist by fine-tuning an open-source model on custom-written and collected Cognitive Behavioral Therapy (CBT) sessions. The data contains conversations that illustrate how to apply CBT techniques, including cognitive restructuring and mindfulness.
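
For anyone curious about the setup, here is a minimal sketch of this kind of supervised fine-tune with LoRA adapters. The base model, dataset path, and hyperparameters are illustrative placeholders, not the ones I actually used:

```python
# Sketch: supervised fine-tuning of an open-source chat model on
# CBT-style conversations with LoRA. Model name, file path, and
# hyperparameters are placeholders, not the real configuration.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "mistralai/Mistral-7B-v0.1"  # hypothetical base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA keeps the fine-tune cheap: only small adapter matrices train.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32,
                                         task_type="CAUSAL_LM"))

# Each JSONL record holds one session flattened into a "text" field,
# e.g. "Therapist: ...\nClient: ...\nTherapist: ..."
data = load_dataset("json", data_files="cbt_sessions.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True,
                                     max_length=2048),
                remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="luna-cbt",
                           num_train_epochs=3,
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=8,
                           learning_rate=2e-4,
                           logging_steps=10),
    train_dataset=data,
    # mlm=False gives plain causal-LM labels (inputs shifted by one).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```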

It is mostly focused on asking insightful questions. Note: it is not a production-ready product; I am testing it and gathering feedback.

You can access it here: https://poe.com/PsychologistLuna

Sorry that it is on Poe, but this was much faster than building my own mobile-friendly website.

Since it is an LLM, it is prone to hallucinations or responses that might be perceived as rude. Please use it with caution.

[–] tsundoku-16@alien.top 1 points 10 months ago (1 children)

I agree with the points you are raising. This is meant to be a companion (think Replika) that you can talk to in order to sort out your feelings and learn mindfulness techniques. It is not meant for somebody struggling with serious mental health issues. The description already mentions several times that it is generative AI and should be used with caution. I should clarify that further in the bot's description and its naming so as not to mislead people.

I have thought about the dangers it poses for a long time, but I think that, if built and used responsibly, it has the potential to help a huge number of people, especially right now, when there is a severe shortage of mental health professionals and few people can afford one. I also haven't had great personal experiences with human therapists; most of them put me on a lot of medication that ended up not helping at all.

As for the model size, this release is only for gathering feedback; my current inference server cannot handle a large number of users. If I ever do a bigger release, it will not happen without a much more capable model, extensive testing, more safety guardrails, and RLHF.
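
To give a concrete idea of what I mean by guardrails, here is a minimal sketch of one layer: a crisis-language check that runs before the model generates anything. The patterns, referral text, and `generate` hook are placeholders, not production logic:

```python
# Sketch of one guardrail layer: scan user input for crisis language
# before the model responds, and hand off to human resources instead
# of generating. Patterns and referral text are placeholders only.
import re

CRISIS_PATTERNS = [
    r"\bsuicid(e|al)\b",
    r"\bkill(ing)? myself\b",
    r"\bhurt(ing)? myself\b",
    r"\bself[- ]harm\b",
]

CRISIS_REFERRAL = (
    "It sounds like you may be going through a crisis. I am an AI and "
    "not equipped to help with this. Please contact a crisis line or "
    "a mental health professional."
)

def guarded_reply(user_message: str, generate) -> str:
    """Return a referral on crisis input, else the model's reply.

    `generate` is whatever function calls the fine-tuned model;
    it is a hypothetical hook, not a real API.
    """
    lowered = user_message.lower()
    if any(re.search(p, lowered) for p in CRISIS_PATTERNS):
        return CRISIS_REFERRAL
    return generate(user_message)
```

A real deployment would layer this with a moderation model and output-side filtering, but the principle is the same: the bot refuses and refers rather than improvising in high-risk situations.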

[–] Purple-Ad-3492@alien.top 1 points 10 months ago

If you're expecting people with a legitimate need for therapy to use this "responsibly" and "with caution," that essentially defeats the purpose of the bot, which was intentionally created as a reliable, self-policed companion for a user base that varies widely in what it even considers a normal sense of responsibility and caution.

There are way too many factors at play to mitigate in terms of liability, as others have said. Take this as an example of a bot created to help those with eating disorders: https://www.npr.org/sections/health-shots/2023/06/08/1180838096/an-eating-disorders-chatbot-offered-dieting-advice-raising-fears-about-ai-in-hea