if you're expecting people with a legitimate need for therapy to use this "responsibly" and "with caution", that essentially defeats the purpose of the bot, which was intentionally created to be a reliable, self-policed companion for a user base that spans a wide spectrum of interpretations of what even constitutes a normal sense of responsibility and caution.
way too many factors at play to mitigate in terms of liability, as others have said. take this as an example of a bot created to help people with eating disorders: https://www.npr.org/sections/health-shots/2023/06/08/1180838096/an-eating-disorders-chatbot-offered-dieting-advice-raising-fears-about-ai-in-hea