Culture War Roundup for the week of March 16, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Nah, I am fond of you, even though I agree we have our disagreements. I can't even be mad at the AMA, they're not responsible for my med school being subpar and lazy.

I was just in a very bad place yesterday, largely to do with the fact that I work in the UK, where I ended up precisely because I'm still not eligible for the USMLE. That makes the whole situation somewhere between academic and painful to engage with: what difference does it make what I think?

(This standard, applied fairly, would preclude most engagement on this platform. I will cop guilty to mild hypocrisy.)

For what it's worth, I think American doctors are world class, only closely rivaled by places like Singapore or other very rich First World countries. The UK is far more uneven, even if I regularly meet doctors who are both better than me and are at a level I would consider well past competent. India? All I can say is that there are plenty of doctors who are world-class, and I know many who would easily fit in in the US. And a lot of idiots who would find a way to kill a cadaver.

Where am I in the grand scheme of things? Idk. My confidence is shot in many ways, but I think the objective evidence, at least from exam results, is that I am above average with respect to my peers in India or the UK. I am not a senior physician, so I suppose that is good enough. I can handle most things in psychiatry, at least if I have time, Google and ChatGPT to help me. I absolutely can match my bosses if I have those tools at hand, which I do have in most scenarios that aren't academic assessments.

However, the medical training pipeline in the US is a bad joke. Mandatory pre-med? What the absolute flying fuck. I don't need my surgeon to have read Seneca in order to harvest my appendix. Even harder stuff like chemistry or microbiology is a waste of time, the syllabus covers everything we need to know. There is very good reason that the rest of the world doesn't do this.

Then there's the fact that the sheer explosion in mid-levels is because of the doctor shortage. I think that, in objective terms, there is nothing wrong with making tradeoffs between quality and availability. Otherwise we could have only the single most talented med student per year become a doctor, and have them train for 50 years so they can handle their own geriatric care right before the next candidate takes them off for MAID.

If you can't legally do that, since the quality of medical training is sacrosanct, then the system will try very hard to route around doctors. I have no reason to think the US has found the global optimum, and I think you guys are too strict. Obviously, I wouldn't endorse any measures that dilute physician wages to the point nobody bothers, but that is very unlikely to happen anyway.

If you float a bill that says: all global physicians who come from a very specific med school in India, have practiced in the UK, are at least six feet tall and right-handed, are welcome to enter the US? It would have my vote. But I am obviously biased.

Anyway, you ended up coaxing me into a more substantial reply than intended, so I hope both you and @DirtyWaterHotDog are happy. Still not everything I have to say, or could say, but it's something.

I think it's important to keep in mind (and this applies to the AI side of things also) that the U.S. is a weird country and that weirdness has benefits and costs.

One of the things that pops up is that the demands on U.S. physicians extend beyond academic medical knowledge; this is one of the reasons that individuals with perfectly excellent medical knowledge sometimes don't survive the retraining process.

Doctors here have to survive the vagaries of our legal system, malpractice environment, U.S. patients (who are...different) and other factors. This requires both higher standards of behavior and other standards of behavior.

Much of this I would happily get rid of, but those aren't usually the targets complainers want to go after. Tort reform would solve a lot of problems.

With respect to the physician shortage, it mostly isn't real.* If you live in a reasonably sized area you can get a PCP. You may need to look around, and you may have to establish care before you get sick. But the problem is that U.S. patients are demanding, don't want to think ahead, and want instant gratification. If you live outside of a major area you may not be able to find a doctor, but that's because of the allocation problem. Midlevels don't fix the allocation problem because they also don't want to move to those areas, and so they don't.

For specialists the situation is more complicated: some of them can't be replaced by midlevels and can't really have their training numbers increased, while others have had shortages worsened by midlevels (like cardiologists getting over-consulted; same for neurology and some others). Others, like psychiatry, have a situation where people just stopped taking insurance and went cash-pay because it's more lucrative, leading to a phantom shortage.

*Strong disconnect between the feel people have about this and the actual numbers.

I can't really argue against you, can I? It's been like 25 years and change since I was in the States, and would have been like 6 months if my visa didn't bounce. Perhaps American patients genuinely are more demanding and entitled, I can promise you I have seen more than my fair share of demanding and entitled patients in the UK (while the NHS is free, a doctor is not considered that different to a bus driver). I have seen demographic disasters/marvels that have simultaneously made me lose hope in humanity and rekindled my optimism for medical progress. Like seriously, how the fuck are some of these people still alive?

However, I am confident that there is a severe physician shortage when you leave urbanity. Several states, from memory including Texas and Mississippi, have quite recently massively relaxed the requirements for foreign doctors to come and practice as long as they go to underserved areas. They don't even need to have cleared the whole USMLE, let alone have completed a residency. You bet I follow that kind of stuff like my life depends on it; in a very meaningful sense, it does.

The fact that this happens at all, let alone in like 3 or 4 states (could be more), is suggestive of something. Of course the typical doctor wants to be in the Big City and will fight to go there. In India or the UK, where doctors are more abundant, the sheer competition will force some of them to take up less than ideal appointments, albeit not at a major loss to earning potential.

I've worked in for-profit systems, in not-for-profit setups, and plenty in between. I would be rather surprised if the US was qualitatively different and not merely quantitatively so. I would happily take the hassle of insurance and the extra medicolegal liability if I could double or triple my income. I'll take the risk of getting shot or knifed. You guys have it very good, by global terms, at least when done with residency.

This is not really an argument that we should let every dude with half a frontal lobe practice medicine, or that doctors should be paid worse. But I think you heavily discount the risk posed by AI, both present and future. I have had senior doctors, family and otherwise, try and flummox recent versions of ChatGPT. They can't pimp it. It will out-pimp them. It's not going to do surgery for you today, or even next year, but many branches of our profession do not rely on procedural skills to make a living (I say, while crying that I have to do cannulations, catheterization or conduct ECGs because our nurses are useless).

I have had senior doctors, family and otherwise, try and flummox recent versions of ChatGPT. They can't pimp it. It will out-pimp them

Playing devil's advocate, have you seen a medium-IQ patient try and do so? I think this is what @Throwaway05 was alluding to - experts gather appropriate data and naturally pitch things a certain way. In a very meaningful sense, both the biological and silicon doctors have been trained on the same data and even when your family try to flummox GPT, they will unconsciously produce the kind of gotchas that broadly match the training set.

Now take your average patient. They may mis-state, misinterpret or lie about their symptoms. They may be panicked and exaggerate, or be tough guys and undersell. Standard LLMs have a tendency to trust their input sources too much, although you can train them out of this. LLMs also have a massively reduced amount of input compared to the amount of info you would get in even 5s of video, and they can panic - I once called an ambulance on the instructions of one, and although that decision was certainly defensible, it was objectively incorrect; I don't think a real doctor would have done so.

I would not expect even a custom-LLM to do nearly as well as a doctor-LLM centaur, and I know that tests found otherwise but those tests did not AFAIK deal with real-world end-to-end consultations.

Fortunately for me, computing is similar. I have experimented with agents etc. and they don't do good work left to themselves. I find AI massively helpful in my work but I'm confident my CEO couldn't get even close to the same results using the same tools.

I can't say that I've ever intentionally tried to get someone to do that experiment in front of me, but I have, on request, reviewed the advice given by various LLMs and haven't really seen any egregious errors in a long time.

Poor information will degrade performance in anyone, including human physicians. Our usual approach is to assume honesty and then update towards the possibility that someone is lying or mistaken when further evidence comes in. Of course, it depends on what exactly someone says and how they present. If a clearly twitchy, disheveled guy shows up and claims to have never done alcohol or drugs, I would be rather suspicious. If you have herpetic lesions on your cock and claim you've never slept with anyone but your wife, I will not take your testimony entirely at face value.

I am obviously at an advantage asking ChatGPT questions, because I have at least a vague idea of the kind of information that is important to disclose even outside psychiatry, I know the kind of issues that would make a cardiologist or ophthalmologist sit up straight and lock in. Yet it will consistently ask me for clarification or for more information to narrow down the answer-space, and I have no reason to assume it wouldn't do this for a layperson. In fact, it might even be more thorough, because it might (correctly) assume that there's more room for error or misunderstanding in that scenario.

Right now, a Doctor-LLM cyborg is probably superior to either alone, at least in realistically messy scenarios. But even just the LLM will, at least with the exact same information and affordances, perform quite well. Most of the edge cases can be covered by the assistance of a mid-level or a junior doctor.

I can tell you that my bosses are too old to have learned to rely on AI as heavily as I do, so assuming they don't, I can match their performance while using it. Similarly, a smart med student might be able to match my performance if I was stripped of AI assistance while they got to use it. This gap is, as far as I can tell, becoming increasingly narrow with time.

With respect to AI, I'm sure it will get it done eventually, and I am sure it has tremendous pressure to do the job.

AI can do simple fact recall; it struggles much more to deal with a patient who tells you he fell and can't move his leg at all, when really it's that he won't move his leg due to pain. The AI will probably assume a neurologic deficit and trigger a work-up for that. A physician will poke the patient, see him move, and assume traumatic injury. Someone needs to get clean data to give to the AI for outsourcing to work (for now).

And yes, some states are doing alternative paths to practice; this is exploding in popularity and it's a solution to the allocation problem. It may even be a good one, but it seems ineffective so far because people can't get malpractice coverage or privileges without actual training - the liability risk is too high.

I am sorry to say this, but you are woefully underestimating the ability of AI.

I threw your hypothetical into ChatGPT, the paid version, and even before it finished thinking (it's doing a lot of thinking, and it all seems relevant) it immediately noted:

The big early split is whether he “can’t move it” because of pain/mechanical disruption, because the limb has lost perfusion or nerve function, or because the problem is actually central, such as spine or stroke. That distinction changes the whole pathway immediately.

Emphasis added.

In the full workup, it mentions:

“Is it too painful to move, or does it feel weak/dead/numb?”

Seriously, try this for yourself. Get a paid subscription and try and find a clinical scenario where the evidence you have provided has a definitive answer, which the AI is unable to diagnose even after the same amount of effort a human clinician would devote.

Here's the full conversation, if you want to take a look:

https://chatgpt.com/share/69be7d62-ad60-800b-a335-bf527ee5168e

Someone needs to get clean data to give to the AI for outsourcing to work (for now).

AI can't do a lot of clinical work, because it lacks hands. But it can borrow someone else, say an NP or PA or just a nurse. And then it can do things that would otherwise take a human doctor.

I don't want to undermine our profession, but you have to understand that I value probity over professional solidarity, at least if challenged. When I do disclose how close we are to replacement, it's where it doesn't matter; I don't want to lose my job either, but I can rarely bring myself to mislead when I genuinely believe otherwise. I'm not accusing you of being misleading, by any means, but consider this example a data point suggesting you're not considering how scarily good LLMs can be. We can take this to DMs if you prefer.

Interesting result. Out of curiosity, do you have a user-set preamble that you use as well as the base question? Generally when GPT starts using words like 'brutally practical' it's because the user has requested that.

Not really. My personalization settings tell it I'm a psychiatry resident and a transhumanist. I have some stylistic instructions, such as to never do calculations without tool-calls, but nothing that should make it act in an unusual manner. There is nothing in recent chat history that should change this, I do have memory enabled. It's on the default tone and personality settings too.

we could have only the single most talented med student per year become a doctor, and have them train for 50 years so they can handle their own geriatric care right before the next candidate takes them off for MAID

I would love somewhere like North Korea to do this and proclaim with unabashed truth and sincerity that they have the best-trained doctor in the world. People from all over could come to gaze in awe at "The Doctor".

I have a pedigree of medical practice that stretches back 4 generations, probably longer if we're willing to accept qualifications that aren't formal. I would volunteer, as long as they give me a VPN and part of Kim's food budget. That is probably the best job security I can aspire to with AI breathing down my neck.