This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Today, OpenAI released an update that puts more mental health guardrails on ChatGPT. I'd been hearing about chatbots inducing psychosis, and I'm assuming this is a response to that. But looking into the topic more, I'm astounded by how much of average people's mental health is becoming tied to these chatbots. Not just people with severe mental illness, but perhaps even a majority of all people who use chatbots.
A recent survey shows:
- 49% of LLM users who self-report an ongoing mental health condition use LLMs for mental health support.
- 73% use LLMs for anxiety management, 63% for personal advice, 60% for depression support, 58% for emotional insight, 56% for mood improvement, 36% to practice communication skills, and 35% to feel less lonely.
A quick browse of reddit down this particular rabbit hole quickly makes me realize how many people are already talking to chatbots as a friend/therapist. My impression is that it's similar to the early days of online dating: people are sort of embarrassed to admit they have an AI friend, but the numbers rise rapidly the younger you go. I've seen figures of between 10 and 50(!) percent of young people having used AI for companionship.
In retrospect, it would be shocking if AI therapy didn't take off. Probably the biggest barriers to getting therapy are cost and availability. Chatbots are available 24/7, essentially free, and will never judge you. The rate of mental illness is rising, particularly among young people, so the demand is there. But it's not just that; the idea of therapy is ingrained in today's culture. There's a sense that everyone should get therapy, that no one among us is truly mentally healthy, etc. I could easily see it becoming as ubiquitous as online dating is today.
I admit I'm personally relatively skeptical of therapy in general. As I understand it, it doesn't really matter which therapeutic method you use; the results are about the same, so probably most of the benefit comes from just having someone empathetic you can vent to who won't judge you or get bored. If that's the case, then AI therapy is probably as good as or better than human therapy for cases that are not severe. On reddit I see a lot of comments saying that AI therapy has helped people more than years of human therapy, and I can believe that.
So if AI therapy is helping so many people, is that a good thing? I see a lot of parallels between AI therapy and AGI's alignment problem. I believe people when they say they went to therapy and report feeling better; I'm not really confident that they came out with a more accurate view of reality. Recently I went down another tangentially related rabbit hole about an online therapist who goes by the name of Dr. K and has publicly streamed his therapy sessions (for legal reasons he doesn't actually call them therapy sessions). The thing that struck me is just how vulnerable a state of mind people are in during therapy, and how subtly assumptions about reality can be pushed on them.
So if you consider how impressionable people are when receiving therapy, how it's becoming increasingly common for adults to use chatbots for therapy, and how it's becoming increasingly common for kids to grow up with chatbots as friends, then the potential impact of subtle value assumptions in these models looms large.
I don't really want to write an entire novel on research and stuff, but the short version is that medical research is hard, and research on anything that involves people and society is also hard. This results in seemingly low effect sizes for therapy, but that shit really does work. It's not necessarily going to work for every patient, situation, or (critically) therapist.
Part of the problem is that we have a large number of low-skill therapists, incorrect patient-therapist/modality matches, incorrect indications, and the whole therapy-culture thing.
CBT and DBT, for instance, have excellent evidence bases and are meant to be highly structured with clear end points. We also have a pretty good understanding of which patients and situations call for each of those therapy modalities.
PTSD treatment is done through therapy and can be quite effective.
For many common conditions you very much need both medication and therapy (using medication alone and getting poor efficacy is the other side of the psychiatric-complaint coin).
However, most presentations of therapy you see on the internet involve people getting matched with a random low-skill therapist they don't vibe with and being engaged indefinitely in a process that is never explained to them, which therefore feels like just venting.
That's not the real thing, in the same way that paying your friend who is a college athlete to help you isn't the same as getting actual PT (physical therapy).
However, low-skill therapy is probably better for society to have around than nothing, and high-skill therapy can be extremely expensive, so we are stuck with this.
The preliminary research seems pretty good, but a lot of psychiatrists are essentially betting their careers that some of the usual business is happening: motivated research looking for the "right" conclusion, poor measures of improvement (patients may feel subjectively supported but show no improvement in functional status), and so on. Every time The New Thing comes out it looks great initially and is then found to be ass, or at least a bit more sketchy.
The lack of existential fulfillment provided by AI, its overly glazing behavior, and a surplus of Cluster B users and psychotic patients receiving validation for their delusions will lead to problems, likely including a good number of patients who end up actually dangerous and violent.
If the tools don't improve drastically and quickly (which they probably will), I'd expect a major terror event and then STRONG guardrails.
You see some reports on social media of doctors finding their patients encouraged to do some really bad shit by a misfiring chatbot.
What is a good way to learn more about our understanding of best practices for when to apply which flavor of therapy?
Usually I unhelpfully reply "do a lit review!!!" to these sorts of questions, but after a quick look myself I don't think it would be that easy - "become an expert in therapy" is probably more accurate, but is about as unhelpful as it is predictable.
The challenging bit is that therapy (especially CBT) is "indicated" for about everything, but that doesn't tell you which types of patients will benefit most from which types of therapy.
I'm not an expert in this by any means.
It is worth noting that "real" therapy (or many popular types of therapy) is often less ooey-gooey emotional exploration and more closely resembles Socratic questioning or an outright class (in the case of CBT, which is driven by "homework").
I do have a family member who is not in psychology or psychiatry (or medicine) who listens to psychiatric podcasts, and a few of them dig into this explicitly; you could probably do that if you really wanted to develop a knowledge base.
Some modalities are more specific, however: DBT is for Borderline Personality Disorder and people struggling with Cluster B coping mechanisms as part of their pathology. It can work quite well for this.
Classically (especially for any U.S. medical students reading!), the answer to any board question at the Shelf or Step 1-3 level that includes CBT as an option is going to be CBT, unless it's DBT for Borderline.
I had considered adding the caveat of "I am happy to do my own research, but if you have any pointers on where to start that would be much appreciated," but then got distracted and just clicked "Comment".
I appreciate your thoughtful answer.
You can actually get a good explanation by asking an LLM (preferably a good one). Something like Deep Research will at least give you the hooks to get into the topic. I'm not saying this just because I'm lazy, but because I expect you to get a good answer out of it.
I suspect that if you know a PsyD or other actual psychotherapist they might have more helpful advice, but my quick lit review didn't turn up anything useful.
I do generally suggest that everyone in medicine read Nancy McWilliams' Psychoanalytic Diagnosis for an understanding of personality structure, since it has broad application to life, and general medicine still needs to know how to deal with personality dysfunction.
Some of the chapters are still fun to read with zero background (ex: Anti-Social).
It won't answer your specific question directly but will provide a lot of context and peck at it a bit.