this post was submitted on 10 Nov 2023
1 points (100.0% liked)

Machine Learning

I made an LLM therapist by fine-tuning an open-source model on custom-written and collected Cognitive Behavioral Therapy (CBT) sessions. The data contains conversations that illustrate how to employ CBT techniques, including cognitive restructuring and mindfulness.
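For context, here is a minimal sketch of what supervised fine-tuning on conversation transcripts like these might look like, assuming Hugging Face transformers, datasets, and peft with LoRA; the base-model ID, the cbt_sessions.jsonl file, its format, and the hyperparameters are illustrative assumptions, not the actual pipeline used for this bot.

    # Hypothetical sketch: LoRA fine-tuning a causal LM on dialogue transcripts.
    # Assumes transformers, peft, and datasets are installed; all paths and
    # hyperparameters below are placeholders, not the author's real setup.
    from datasets import load_dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    base_model = "meta-llama/Llama-2-13b-hf"  # base model named later in the thread
    tokenizer = AutoTokenizer.from_pretrained(base_model)
    tokenizer.pad_token = tokenizer.eos_token

    model = AutoModelForCausalLM.from_pretrained(base_model, device_map="auto")
    model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32,
                                             lora_dropout=0.05,
                                             task_type="CAUSAL_LM"))

    # Hypothetical JSONL file: one {"text": "<client/therapist dialogue>"} per line.
    data = load_dataset("json", data_files="cbt_sessions.jsonl")["train"]
    data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=2048),
                    batched=True, remove_columns=data.column_names)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="cbt-llm", per_device_train_batch_size=1,
                               gradient_accumulation_steps=16, num_train_epochs=3,
                               learning_rate=2e-4, fp16=True, logging_steps=10),
        train_dataset=data,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
    model.save_pretrained("cbt-llm-adapter")  # saves only the LoRA adapter weights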

It is mostly focused on asking insightful questions. Note: it is not a production-ready product. I am testing it and gathering feedback.

You can access it here: https://poe.com/PsychologistLuna

Sorry that it is on Poe, but this was much faster than building my own mobile-friendly website.

Since it's an LLM, it is prone to hallucinate or give responses that might be perceived as rude. Please use it with caution.

top 23 comments
[–] fvillena@alien.top 1 points 10 months ago

Can we take a look at the fine tuning dataset?

[–] Infamous-Bank-7739@alien.top 1 points 10 months ago (1 children)

Does the open-source model's license agreement allow reusing the model without indicating which base model was used for fine-tuning?

[–] tsundoku-16@alien.top 1 points 10 months ago

I updated my post. I am using Llama-2-13B. I was not really sure whether the Llama 2 license requires me to disclose that, but I did it just in case.

[–] gunbladezero@alien.top 1 points 10 months ago (1 children)

Thanks, but my healthcare plan only covers ELIZA

[–] _koenig_@alien.top 1 points 10 months ago (1 children)

[–] smorga@alien.top 1 points 10 months ago

Paging Dr. Sbaitso

[–] 3DHydroPrints@alien.top 1 points 10 months ago (2 children)

I hope you fully understand the consequences that can occur by releasing such a model. A relatively small model with (what I assume) a relatively small, non-real-world dataset, used by a mentally unstable person, can lead to bad things IMO. That's why companies are very careful with this type of application. Huge liability and ethical issues.

[–] tsundoku-16@alien.top 1 points 10 months ago (1 children)

I agree with the points you are raising. This is meant to be like a companion (think of Replika) that you can talk to, to sort out your feelings and learn mindfulness techniques. It is not meant for somebody who is struggling with serious mental health issues. I already have a description that mentions several times that it is generative AI and should be used with caution. I think I should clarify that further in the bot's description and its naming, so as not to mislead people.

I have thought about the dangers it poses for a long time, but I think that if it is built and used responsibly, it has the potential to help a huge number of people. Especially right now, because there is a severe shortage of mental health professionals and not many people can afford them. I also didn't have a great personal experience with human therapists; most of them put me on a lot of medication that ended up not helping at all.

As for the model size, this is only for getting some feedback. My current inference server cannot handle a large number of users. Even when I do intend a bigger release, I will not do it without a much more capable model, extensive testing, more safety guardrails, and RLHF.
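As a rough illustration of what one such guardrail could look like (a hypothetical sketch only, not this bot's actual implementation; the keyword list, the generate_reply callable, and the referral text are placeholders), a pre- and post-generation filter might short-circuit crisis-related messages to a static referral instead of a model reply:

    # Hypothetical sketch of a minimal crisis-keyword guardrail around a chat model.
    # The keyword list, generate_reply(), and the referral text are placeholders;
    # a real deployment would need a far more robust classifier and clinical review.
    from typing import Callable

    CRISIS_KEYWORDS = {"suicide", "kill myself", "self-harm", "overdose"}

    CRISIS_REFERRAL = ("I'm not able to help with this safely. Please contact local "
                       "emergency services or a crisis hotline in your country right away.")

    def is_crisis(text: str) -> bool:
        lowered = text.lower()
        return any(keyword in lowered for keyword in CRISIS_KEYWORDS)

    def guarded_reply(user_message: str, generate_reply: Callable[[str], str]) -> str:
        # Screen the user's message before it ever reaches the model.
        if is_crisis(user_message):
            return CRISIS_REFERRAL
        reply = generate_reply(user_message)  # e.g. a call into the fine-tuned model
        # Screen the model's output too, since it can still produce harmful content.
        if is_crisis(reply):
            return CRISIS_REFERRAL
        return reply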

[–] Purple-Ad-3492@alien.top 1 points 10 months ago

if you're expecting people who legitimately need therapy to use this "responsibly" and "with caution", it essentially defeats the purpose of the bot, which was intentionally created to be a reliable, self-policed companion for a user base that varies widely in how it even interprets what constitutes a normal sense of responsibility and caution.

way too many factors at play to mitigate in terms of liability, as others have said. take this as an example of a bot created to help those with eating disorders: https://www.npr.org/sections/health-shots/2023/06/08/1180838096/an-eating-disorders-chatbot-offered-dieting-advice-raising-fears-about-ai-in-hea

[–] Ronny_Jotten@alien.top 1 points 10 months ago (1 children)

How is it all that different from writing a self-help book, though, in a legal sense? Why is there any bigger liability and ethical issue? There are millions of such books and articles published, full of all sorts of nonsense. A mentally unstable person might follow one and experience a bad outcome. That's no reason to stop releasing books, though. A simple disclaimer seems to suffice. I understand that the experience of an LLM is not the same thing as reading a book, but it is in a sense just indexing and summarizing many texts in an interactive way. Why do you think it would be treated differently under the law? Are there specific laws that apply to LLMs but not to books?

[–] weaponized_lazyness@alien.top 1 points 10 months ago

An easy difference is the data: you hardly give any personal data when buying a book, but you would have to expose your deepest secrets to this system. If the system is trained on interactions with patients, it may even leak this data through responses in the future.

[–] BigBayesian@alien.top 1 points 10 months ago (2 children)

Feedback: as others have noted, unless you live in a country where therapy isn't regulated, this is a ticking legal time bomb. If you think no one at a company like Better Help has thought about building this, you're wrong. The issues are:

  • Where does legal liability fall when someone self-harms?
  • Where does your training set come from? Existing therapy-log datasets whose legal agreements with clients don't mention LLM training as a possible use case may not be usable in the way you've used them.
  • What's your core hypothesis? That generating therapist-language has therapeutic value? Remember that the whole profession is about indirectly developing a model of what's going on in someone else's head, and then leading them to a conclusion that the therapist thinks may help them (this is reductionist and overly simplistic, but I think the point holds). That level of modeling and indirection seems poorly suited to a language generator.
[–] synthphreak@alien.top 1 points 10 months ago

print("I understand. And how did that make you feel?")
[–] Appropriate_Ant_4629@alien.top 1 points 10 months ago

But think of the upside.

With a lack of well-enforced privacy policies, the data such systems can mine for...

This project could be the next FTX + Theranos.

[–] rejectedlesbian@alien.top 1 points 10 months ago

I would just LOVE to see the technical details.

[–] Distinct-Target7503@alien.top 1 points 10 months ago

Where is the LLM hosted?

[–] Teomaninan@alien.top 1 points 10 months ago

You look lonely, I can fix that.

[–] CVxTz@alien.top 1 points 10 months ago (1 children)

Legally and ethically questionable to automate healthcare this way.

[–] Worish@alien.top 1 points 10 months ago

I don't agree. As a tool used by the individual, there are few ethical concerns.

[–] BornAgainBlue@alien.top 1 points 10 months ago

I already built therapist bots, fun times.

[–] Duke_Koch@alien.top 1 points 10 months ago

Tried it out. It’s pretty good, but the main flaw I see is that it gets stuck in a loop of asking the same or similar questions rather than going deeper in the conversation.

[–] Own_Quality_5321@alien.top 1 points 10 months ago

Where and how did you get the training data?

[–] physicianmusician@alien.top 1 points 10 months ago

Sweet... can you guys compare it to this quick and easy one I made with a custom GPT? I tried to oversell it on Twitter, though the instructions I used were very detailed and derived from clinical experience:

https://twitter.com/pianozack/status/1722855484951240904