Culture War Roundup for the week of March 16, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

I have had senior doctors, family and otherwise, try to flummox recent versions of ChatGPT. They can't pimp it. It will out-pimp them.

Playing devil's advocate, have you seen a medium-IQ patient try to do so? I think this is what @Throwaway05 was alluding to - experts gather appropriate data and naturally pitch things a certain way. In a very meaningful sense, both the biological and silicon doctors have been trained on the same data, and even when your family try to flummox GPT, they will unconsciously produce the kind of gotchas that broadly match the training set.

Now take your average patient. They may mis-state, misinterpret or lie about their symptoms. They may be panicked and exaggerate, or play the tough guy and undersell. Standard LLMs have a tendency to trust their input sources too much, although you can train them out of this. LLMs also work from a massively reduced amount of input compared to the info you would get from even five seconds of video, and they can panic - I once called an ambulance on the instructions of one, and although that decision was certainly defensible, it was objectively incorrect, and I don't think a real doctor would have made the same call.

I would not expect even a custom LLM to do nearly as well as a doctor-LLM centaur. I know some tests have found otherwise, but AFAIK those tests did not deal with real-world end-to-end consultations.

Fortunately for me, computing is similar. I have experimented with agents and the like, and left to themselves they don't do good work. I find AI massively helpful in my work, but I'm confident my CEO couldn't get even close to the same results using the same tools.

They may mis-state, misinterpret or lie about their symptoms.

Yes, this is going to be a hard problem, and self_made does give a counterargument, but you'll also see downstream problems that cause angst.

For instance, ADHD evaluations. Most people will say the right things while not really having adult-onset ADHD. A psychiatrist gets to do this weird dance and figure out how often to bend. The LLM is either going to be confused and offer everyone stimulants, or be rebelled against once it refuses everyone.

Anyone else with motivation issues will have the same problem - testosterone? Inappropriate antibiotics?

People who want conservative or aggressive management for something will rapidly figure out how to get it, and all kinds of other bullshit besides.

Human doctors vary in their responses and can make a variety of judgement calls.

As could true AI, if we get that, but we don't have it.

I can't say that I've ever intentionally tried to get someone to do that experiment in front of me, but I have, on request, reviewed the advice given by various LLMs and haven't really seen any egregious errors in a long time.

Poor information will degrade performance in anyone, including human physicians. Our usual approach is to assume honesty, then update towards the possibility that someone is lying or mistaken as further evidence comes in. Of course, it depends on what exactly someone says and how they present. If a clearly twitchy, disheveled guy shows up and claims never to have touched alcohol or drugs, I would be rather suspicious. If you have herpetic lesions on your cock and claim you've never slept with anyone but your wife, I will not take your testimony entirely at face value.

I am obviously at an advantage asking ChatGPT questions, because I have at least a vague idea of the kind of information that is important to disclose even outside psychiatry; I know the kind of issues that would make a cardiologist or ophthalmologist sit up straight and lock in. Yet it will consistently ask me for clarification or for more information to narrow down the answer-space, and I have no reason to assume it wouldn't do this for a layperson. In fact, it might even be more thorough, because it might (correctly) assume that there's more room for error or misunderstanding in that scenario.

Right now, a doctor-LLM cyborg is probably superior to either alone, at least in realistically messy scenarios. But even the LLM alone, given the exact same information and affordances, will perform quite well. Most of the edge cases can be covered by the assistance of a mid-level or a junior doctor.

I can tell you that my bosses are too old to have learned to rely on AI as heavily as I do, so assuming they don't, I can match their performance while using it. Similarly, a smart med student might be able to match my performance if I were stripped of AI assistance while they got to use it. That gap is, as far as I can tell, narrowing with time.