
Culture War Roundup for the week of March 16, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I think it's important to keep in mind (and this applies to the AI side of things also) that the U.S. is a weird country and that weirdness has benefits and costs.

One of the things that pops up is that the demands on U.S. physicians extend beyond academic medical knowledge; this is one of the reasons that individuals with perfectly excellent medical knowledge sometimes don't survive the retraining process.

Doctors here have to survive the vagaries of our legal system, the malpractice environment, U.S. patients (who are...different), and other factors. This requires both higher standards of behavior and different ones.

Much of this I would happily get rid of, but those aren't usually the targets complainers go for. Tort reform would solve a lot of problems.

With respect to the physician shortage, it mostly isn't real.* If you live in a reasonably sized area you can get a PCP. You may need to look around, and you may have to establish care before you get sick. But U.S. patients are demanding, don't want to think ahead, and want instant gratification. If you live outside of a major area you may not be able to find a doctor, but that's because of the allocation problem. Midlevels don't fix the allocation problem because they don't want to move to those areas either, and so they don't.

For specialists the situation is more complicated. Some of them can't be replaced by midlevels and training volumes can't really be increased; others have had shortages worsened by midlevels (like cardiologists getting over-consulted, same for neurology and some others). Still others, like psychiatry, have a situation where people just stopped taking insurance and went cash-pay because it's more lucrative, leading to a phantom shortage.

*There's a strong disconnect between the way people feel about this and the actual numbers.

I can't really argue against you, can I? It's been like 25 years and change since I was in the States, and it would have been like 6 months if my visa hadn't bounced. Perhaps American patients genuinely are more demanding and entitled; I can promise you I have seen more than my fair share of demanding and entitled patients in the UK (while the NHS is free, a doctor is not considered that different to a bus driver). I have seen demographic disasters/marvels that have simultaneously made me lose hope in humanity and rekindled my optimism for medical progress. Like seriously, how the fuck are some of these people still alive?

However, I am confident that there is a severe physician shortage when you leave urbanity. Several states, from memory including Texas and Mississippi, have quite recently massively relaxed the requirements for foreign doctors to come and practice as long as they go to underserved areas. They don't even need to have cleared the whole USMLE, let alone have completed a residency. You bet I follow that kind of stuff like my life depends on it; in a very meaningful sense, it does.

The fact that this happens at all, let alone in like 3 or 4 states (could be more), is suggestive of something. Of course the typical doctor wants to be in the Big City and will fight to go there. In India or the UK, where doctors are more abundant, the sheer competition will force some of them to take up less-than-ideal appointments, albeit not at a major loss to earning potential.

I've worked in for-profit systems, in not-for-profit setups, and plenty in between. I would be rather surprised if the US was qualitatively different and not merely quantitatively so. I would happily take the hassle of insurance and the extra medicolegal liability if I could double or triple my income. I'll take the risk of getting shot or knifed. You guys have it very good, in global terms, at least once done with residency.

This is not really an argument that we should let every dude with half a frontal lobe practice medicine, or that doctors should be paid worse. But I think you heavily discount the risk posed by AI, both present and future. I have had senior doctors, family and otherwise, try and flummox recent versions of ChatGPT. They can't pimp it. It will out-pimp them. It's not going to do surgery for you today, or even next year, but many branches of our profession do not rely on procedural skills to make a living (I say, while crying that I have to do cannulations and catheterizations or conduct ECGs because our nurses are useless).

I have had senior doctors, family and otherwise, try and flummox recent versions of ChatGPT. They can't pimp it. It will out-pimp them.

Playing devil's advocate, have you seen a medium-IQ patient try and do so? I think this is what @Throwaway05 was alluding to - experts gather appropriate data and naturally pitch things a certain way. In a very meaningful sense, both the biological and silicon doctors have been trained on the same data and even when your family try to flummox GPT, they will unconsciously produce the kind of gotchas that broadly match the training set.

Now take your average patient. They may mis-state, misinterpret or lie about their symptoms. They may be panicked and exaggerate, or be tough guys and undersell. Standard LLMs have a tendency to trust their input sources too much, although you can train them out of this. LLMs also receive a massively reduced amount of input compared to the information you would get in even 5s of video, and they can panic. I have called an ambulance on the instructions of one, and although that decision was certainly defensible, it was objectively incorrect; I don't think a real doctor would have done so.

I would not expect even a custom LLM to do nearly as well as a doctor-LLM centaur. I know some tests found otherwise, but those tests did not, AFAIK, deal with real-world end-to-end consultations.

Fortunately for me, computing is similar. I have experimented with agents etc., and they don't do good work left to themselves. I find AI massively helpful in my work, but I'm confident my CEO couldn't get even close to the same results using the same tools.

They may mis-state, misinterpret or lie about their symptoms.

Yes, this is going to be a hard problem, and self_made does give a counterargument, but you'll also see downstream problems that cause angst.

For instance, ADHD evaluations. Most people will say the right things and at the same time not really have adult-onset ADHD. A psychiatrist gets to do this weird dance and figure out how often to bend. The LLM is either going to be confused and offer everyone stimulants, or be rebelled against when it refuses people.

Anyone else with motivation issues will have the same problem - testosterone? Inappropriate antibiotics?

People who want conservative or aggressive management for something will rapidly figure out how to get it, and all kinds of other bullshit.

Human physicians can vary their responses and make a variety of judgement calls.

As could true AI, if we get that, but we don't have it.

I can't say that I've ever intentionally tried to get someone to do that experiment in front of me, but I have, on request, reviewed the advice given by various LLMs and haven't really seen any egregious errors in a long time.

Poor information will degrade performance in anyone, including human physicians. Our usual approach is to assume honesty and then update towards the possibility that someone is lying or mistaken when further evidence comes in. Of course, it depends on what exactly someone says and how they present. If a clearly twitchy, disheveled guy shows up and claims to have never done alcohol or drugs, I would be rather suspicious. If you have herpetic lesions on your cock and claim you've never slept with anyone but your wife, I will not take your testimony entirely at face value.

I am obviously at an advantage asking ChatGPT questions, because I have at least a vague idea of the kind of information that is important to disclose even outside psychiatry, I know the kind of issues that would make a cardiologist or ophthalmologist sit up straight and lock in. Yet it will consistently ask me for clarification or for more information to narrow down the answer-space, and I have no reason to assume it wouldn't do this for a layperson. In fact, it might even be more thorough, because it might (correctly) assume that there's more room for error or misunderstanding in that scenario.

Right now, a Doctor-LLM cyborg is probably superior to either alone, at least in realistically messy scenarios. But even just the LLM will, at least with the exact same information and affordances, perform quite well. Most of the edge cases can be covered by the assistance of a mid-level or a junior doctor.

I can tell you that my bosses are too old to have learned to rely on AI as heavily as I do, so assuming they don't, I can match their performance while using it. Similarly, a smart med student might be able to match my performance if I was stripped of AI assistance while they got to use it. This gap is, as far as I can tell, becoming increasingly narrow with time.

With respect to AI, I'm sure it will get it done eventually, and I am sure there is tremendous pressure for it to do the job.

AI can do simple fact recall; it struggles much more to deal with a patient who tells you he fell and can't move his leg at all, when really it's that he won't move his leg due to pain. The AI will probably assume a neurologic deficit and trigger a work-up for that. A physician will poke the patient, see him move, and assume traumatic injury. Someone needs to get clean data to give to the AI for outsourcing to work (for now).

And yes, some states are doing alternative paths to practice; this is exploding in popularity, and it's a solution to the allocation problem. It may even be a good one, though it seems ineffective so far because people can't get malpractice coverage or privileges without actual training - the liability risk is too high.

I am sorry to say this, but you are woefully underestimating the ability of AI.

I threw your hypothetical into ChatGPT, the paid version, and even before it finished thinking (it's doing a lot of thinking, and it all seems relevant) it immediately noted:

The big early split is whether he “can’t move it” because of pain/mechanical disruption, because the limb has lost perfusion or nerve function, or because the problem is actually central, such as spine or stroke. That distinction changes the whole pathway immediately.

Emphasis added.

In the full workup, it mentions:

“Is it too painful to move, or does it feel weak/dead/numb?”

Seriously, try this for yourself. Get a paid subscription and try to find a clinical scenario where the evidence provided has a definitive answer but the AI is unable to diagnose it, even after the same amount of effort a human clinician would devote.

Here's the full conversation, if you want to take a look:

https://chatgpt.com/share/69be7d62-ad60-800b-a335-bf527ee5168e

Someone needs to get clean data to give to the AI for outsourcing to work (for now).

AI can't do a lot of clinical work, because it lacks hands. But it can borrow someone else's, say an NP's or PA's or just a nurse's. And then it can do things that would otherwise take a human doctor.

I don't want to undermine our profession, but you have to understand that I value probity over professional solidarity, at least if challenged. When I do disclose how close we are to replacement, it's where it doesn't matter; I don't want to lose my job either, but I can rarely bring myself to mislead when I genuinely believe otherwise. I'm not accusing you of being misleading, by any means, but consider this example a data point suggesting you're not considering how scarily good LLMs can be. We can take this to DMs if you prefer.

No, I get that they can do fancy things, but translating those into a real environment is HARD. We will get there eventually, but we can't get self-driving cars approved because of the liability and failures - medicine has just as much cost pressure to do so but just as many contrary pressures (if not more, because of people's demand for a person).

We also have a side game where LLMs are killing skills and knowledge development by outsourcing thinking (ex: in note writing). That will be a separate problem.

It's 2026, the US has Waymos in Phoenix, SF, LA and slightly more limited availability in Austin, Atlanta, Miami, Dallas, Houston...

That is a lot of people (50 to 60 million nominally served). And that is today.

It's one thing to say self-driving isn't going to happen when they're puttering around a DARPA course and knocking over half the cones. It's a whole different kettle of fish today, when a good fraction of your country can step out and order one through Uber.

It's like claiming we can't land humans on the moon while we already have manned spaceflight and rovers on the surface, and while hundreds of billions of dollars (or pretty much all the positive growth in the US stock market) and thousands of the world's smartest people, some of them drawing literal billion-dollar salaries, work diligently to make it happen - as is the case with AI.

I would not bet against it. Maybe it's more academic for you; you've probably had a decently long and well-remunerated career. On the other hand, I face a very real threat of being deported after outliving my welcome and then coming home to a ship that is simultaneously on fire and sinking. Oh well, at least I can probably make it through my psych residency before things get that bad; the ladder is being pulled up beneath me every time I look down. We can argue about timelines: it might take 5 years, it might take 10. It won't take 20, I'm confident of that much, and I've really put a lot of thought into this.

I wish I were wrong, and you were right. But if wishes were horses, I'd have made PETA very mad by making a living life raft to the States already.

Sure, self-driving cars are getting there, but they aren't in full use yet, and the legal tests aren't all the way there either.

AI may in fact replace everyone at some point, but doctors have more physical work, more patient-interaction work, more need to be a liability sponge, and so on than most other white-collar workers.

Yes, the financial pressure to replace us is higher, but by the time they come for us in a serious way, everyone else will be gone too. Especially in psychiatry - you should have some safety there. More likely is an intractable increase in volume due to AI assistance.

In any case, even the finance people who love this shit are starting to push back against the way our economy is overweighted.

LLMs might just end up getting dropped as a boondoggle before they're applied to too many use cases.

All fair points, but you have to consider that I'm pricing them in. I think that the regulatory and legal hurdles associated with replacing human doctors will buy us anywhere from 1-5 years once automation-induced unemployment has really taken off. It's already started, but I'm talking >25% of the population being laid off and unable to find a job that pays nearly as well.

Even if it "merely" augments humans, the elasticity of demand in medicine is not literally infinite. A world where 90% of doctors are laid off while 10% supervise mid-levels and oversee LLMs (both as troubleshooters and liability sponges) is almost as bad as 100% of us being laid off.

Once a large fraction of the population is unemployed and baying for relief, how long do you think governments can hold out and keep doctors on a pedestal? The AMA is not all-powerful either. It's even worse in the UK, the NHS is floundering, and Rishi Sunak wanted to keep us uppity doctors in check by replacing us with AI and midlevels years ago. They couldn't pull it off then, but it is an increasingly real possibility now. The very fact that doctors are rare and expensive in the US makes you excellent targets. You can't hold out forever. Once a single country or even a state bites the bullet, and succeeds (or doesn't crash and burn), there's going to be a domino effect.

Hell, even places where doctors are more abundant and cheaper aren't safe, mostly because AI is even cheaper and because they're poor countries without the luxury of swallowing as many systemic inefficiencies as the US system can tolerate. The relevant comparison is the delta in pay between the next lower rung of the ladder + a ChatGPT subscription vs a human doctor (or drastically fewer human doctors). That is a very large financial attractor, and barriers are not airtight or as robust as either of us would like.

Anyway, give it 5 more years and we'll see who's right. It's not like I think that being correct about this will change the trajectory of my life; I can only try and lessen the blow.

I mean if everybody's fucked everybody is fucked.

I'm not saying we are immune to getting fucked; we'll just get fucked along with everyone else.

That should be a comfort!


Interesting result. Out of curiosity, do you have a user-set preamble that you use as well as the base question? Generally when GPT starts using words like 'brutally practical' it's because the user has requested that.

Not really. My personalization settings tell it I'm a psychiatry resident and a transhumanist. I have some stylistic instructions, such as never doing calculations without tool calls, but nothing that should make it act in an unusual manner. There's nothing in recent chat history that should change this, though I do have memory enabled. It's on the default tone and personality settings too.

Cool, just interested in how you use it.

My main thing is ‘don’t use search unless explicitly instructed’. Base knowledge almost always seems to be better than whatever slop it finds on the first page of Google.

Huh. I would expect the exact opposite, but I've left search on pretty much since it became an option and haven't checked. Can't say it's been an issue in practice.