Today, OpenAI released a new update that will put more mental health guardrails on ChatGPT. I'd been hearing about chatbots inducing psychosis, and I'm assuming it was a response to that. But looking into the topic more, I'm astounded by how much of the average person's mental health is becoming tied to these chatbots. Not just people with severe mental illness, but perhaps even the majority of all people who use chatbots.
A recent survey shows:
49% of LLM users who self-report an ongoing mental health condition use LLMs for mental health support.
73% use LLMs for anxiety management, 63% for personal advice, 60% for depression support, 58% for emotional insight, 56% for mood improvement, 36% to practice communication skills, and 35% to feel less lonely.
A quick browse down this particular reddit rabbit hole makes me realize how many people are already talking to chatbots as a friend/therapist. My impression is that it's similar to the early days of online dating: people are sort of embarrassed to admit they have an AI friend, but adoption rises rapidly the younger you go. I've seen estimates that between 10 and 50(!) percent of young people have used AI for companionship.
In retrospect, it would be shocking if AI therapy didn't take off. Probably the biggest barriers to getting therapy are cost and availability. Chatbots are available 24/7, essentially free, and will never judge you. The rate of mental illness is rising, particularly among young people, so the demand is there. But it's not just that: the idea of therapy is ingrained in today's culture. There's a sense that everyone should get therapy, that no one among us is truly mentally healthy, etc. I could easily see it becoming as ubiquitous as online dating is today.
I admit I'm personally relatively skeptical of therapy in general. As I understand it, it doesn't really matter which therapeutic method you use; the results are about the same. So probably most of the benefit comes from just having someone you can vent to who is empathetic and won't judge you or get bored. If that's the case, then AI therapy is probably as good as or better than human therapy for cases that are not severe. On reddit I see a lot of comments saying AI therapy has helped them more than years of human therapy, and I can believe that.
So if AI therapy is helping so many people, is that a good thing? I see a lot of parallels between AI therapy and AGI's alignment problem. I believe people when they say they went to therapy and came out feeling better. I'm not really confident that they came out with a more accurate view of reality. Recently I went down another tangentially related rabbit hole about an online therapist who goes by the name of Dr. K and has publicly streamed his therapy sessions (for legal reasons he doesn't actually call them therapy sessions). What struck me is just how vulnerable a state of mind people are in during therapy, and how subtly assumptions about reality can be pushed on them.
So if you consider how impressionable people are when receiving therapy, how it's becoming increasingly common for adults to use chatbots for therapy, and how it's becoming increasingly common for kids to grow up with chatbots as friends, then the potential impact of subtle value assumptions in these models looms large.