
Culture War Roundup for the week of August 4, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Today, OpenAI released an update that adds more mental health guardrails to ChatGPT. I'd been hearing about chatbots inducing psychosis, and I'm assuming the update is a response to that. But looking into the topic more, I'm astounded by how much of the average person's mental health is becoming tied to these chatbots. Not just people with severe mental illness, but perhaps even the majority of all people who use chatbots.

A recent survey shows:

  • 49% of LLM users who self-report an ongoing mental health condition use LLMs for mental health support.

  • 73% use LLMs for anxiety management, 63% for personal advice, 60% for depression support, 58% for emotional insight, 56% for mood improvement, 36% to practice communication skills and 35% to feel less lonely.

A browse of reddit down this particular rabbithole quickly made me realize how many people are already talking to chatbots as a friend/therapist. My impression is that it's similar to the early days of online dating: people are sort of embarrassed to admit they have an AI friend, but the numbers increase rapidly the younger you go. I've seen estimates that between 10 and 50(!) percent of young people have used AI for companionship.

In retrospect, it would be shocking if AI therapy didn't take off. The biggest barriers to getting therapy are probably cost and availability. Chatbots are available 24/7, essentially free, and will never judge you. The rate of mental illness is rising, particularly among young people, so the demand is there. But it's not just that: the idea of therapy is ingrained in today's culture. There's a sense that everyone should get therapy, that no one among us is truly mentally healthy, and so on. I could easily see it becoming as ubiquitous as online dating is today.

I admit I'm personally rather skeptical of therapy in general. As I understand it, it doesn't really matter which therapeutic method you use; the results are about the same. So most of the benefit probably comes from just having someone empathetic you can vent to who won't judge you or get bored. If that's the case, then AI therapy is probably as good as or better than human therapy for cases that are not severe. On reddit I see a lot of comments saying that AI therapy has helped them more than years of human therapy, and I can believe that.

So if AI therapy is helping so many people, is that a good thing? I see a lot of parallels between AI therapy and AGI's alignment problem. I believe people when they say they went to therapy and came out feeling better. I'm not really confident that they came out with a more accurate view of reality. Recently I went down another tangentially related rabbithole about an online therapist who goes by the name of Dr. K and has publicly streamed his therapy sessions (for legal reasons he doesn't actually call them therapy sessions). What struck me was just how vulnerable a state of mind people are in during therapy, and how subtly assumptions about reality can be pushed on them.

So when you consider how impressionable people are while receiving therapy, how increasingly common it is for adults to use chatbots as therapists, and how increasingly common it is for kids to grow up with chatbots as friends, the potential impact of subtle value assumptions baked into these models looms large.

There's the Dodo Bird Verdict take, where the precise practice of psychotherapy doesn't matter much so long as certain very broad bounds of conduct are respected. If an hour talking with a slightly sycophantic voice is all it takes to ground people, that will surprise me, but it's not bad.

Of course, there are common factors to the common-factors theory. Some of the behaviors outside those bounds of conduct can definitely fuck someone up. Some of them aren't very likely for an LLM (I guess it's technically not impossible for an LLM to 'sleep with' a patient if we count ERP, but it's at least not a common failure mode), but others are things LLMs are more likely to do and that human therapists won't even consider ('oh, it's totally normal to send your ex three million texts at 2am, and if they aren't answering right away that's their problem').

I'm a little hesitant to take any numbers for ChatGPT psychosis seriously. The extent to which reporting is always tied to the most recognizable LLM is a red flag, and self_made_human has made a pretty good argument that we wouldn't be able to distinguish the signal from the noise even presuming there were a signal.

On the other hand, I know about mirror dwellers. People can and do use VR applications as a low-stress environment for developing social skills or overcoming certain stressors. But some portion go wonky in a way that I'm really skeptical they would have broken otherwise. Even if they were going to have problems anyway, I don't think they'd have been the same problems.

((On the flip side, I'll point out that Ani and Bad Rudi are still MIA from iOS. I would not be surprised to see large censorship efforts aimed at even all-ages-appropriate LLM actors, if they squick the wrong people out.))

Funnily enough, I have an AAQC on the Dodo Bird model.

It seems like the most parsimonious explanation, but I would say that it doesn't disqualify therapy as a valid therapeutic intervention. A lot of the people being sent to therapy do not have access to a discreet, thoughtful friend who will keep secrets. That might well be a service worth paying for. What isn't in dispute is that therapy works in the first place, even the models built on bonkers frameworks.

I see no reason LLMs can't make for okay therapists, and they are definitely better than some of the human therapists I have personally met.

At the end of the day, I'm just glad that therapy isn't the only tool in my arsenal, and I can dish out the fun drugs. Psychologists are so painfully restricted in what they can do.

What's a "mirror dweller"?

VRChat (like most other social virtual reality worlds) lets people choose an avatar. At the novice user level, these avatars just track the camera position and orientation, provide a walking animation, and have a limited number of preset emotes, but there's a small but growing industry for extending that connection. Multiple IMUs and/or camera tricks can track limbs, and more dedicated users have tools for face, eye, and hand tracking. These can let an avatar's general pose (sometimes down to finger motions) match that of the real-world person driving it, sometimes with complex modeling going on where an avatar has to represent body parts that the person driving it doesn't have.

While you can go into third-person mode to evaluate how well these pose estimates are working in some circumstances, that's impractical for a lot of in-game use, both for motion sickness reasons and because it's often disruptive. So most VRChat social worlds will have at least one virtual mirror, usually equivalent to at least an eight-foot-tall-by-sixteen-foot-wide space, very prominently placed to check things like IMU drift.
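To give a sense of why that check matters, here's a minimal sketch in Python of how naive dead-reckoning from an IMU accumulates error over a session. The gyro bias and sample rate are illustrative assumptions, not measurements from any real tracker:

```python
# Minimal sketch: why dead-reckoned IMU orientation drifts over a play session.
# The bias and sample rate below are illustrative assumptions, not real specs.

bias_deg_per_s = 0.01    # assumed constant gyro bias, in degrees per second
rate_hz = 100            # assumed IMU sample rate
dt = 1.0 / rate_hz

heading_error_deg = 0.0
for _ in range(2 * 60 * 60 * rate_hz):        # two hours of play
    heading_error_deg += bias_deg_per_s * dt  # naive integration accumulates the bias

print(f"accumulated heading error: {heading_error_deg:.0f} degrees")
# Prints ~72 degrees: enough to leave a tracked limb visibly out of place,
# which is why a quick glance at a mirror is the standard sanity check.
```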

Some people like these mirrors. Really like them. Like, spend-hours-in-front-of-them-and-then-fall-asleep-in-VR levels of like them. This can sometimes be a social thing where groups will sit in front of a mirror and even have discussions together, or sometimes they'll be the one constantly watching the mirror while everyone else is doing their own goofy stuff. But they're the mirror dwellers.

I'm not absolutely sure whatever's going on with them is bad, but it's definitely a pattern of behavior that just wasn't available ten years ago.

Thanks, I hate it.

I finally took the plunge and joined an art discord a couple of months back, and VRChat is a big part of their social activity. I actually have an old VR rig I've never bothered setting up, and I briefly considered joining in, but I increasingly think it's better to leave it on the shelf.