Culture War Roundup for the week of March 16, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


The constraint is fundamentally ensuring adequate training quality

Sounds like a convenient way to set an arbitrary bar that limits the supply of doctors. Lawyers went through a rapid expansion in supply, and it did not reduce the quality of law. If anything, allowing the competition to take place in the open has increased the bar for getting into elite law schools. But now, there is also a sufficient supply of mediocre lawyers who fulfill mundane legal duties.

People with more knowledge than anyone here (including me) have been working this problem for a long time

I apologize for sounding harsh, but that is a bad justification, even more so on a forum that prides itself on identifying collective incompetence and blind spots in elite circles. This is the common excuse of bureaucrats and careerists who love abstractions more than action.

Every year a large tranche of students doesn't advance to the next level of training

Don't the abusive conditions of residency have a lot to do with why people drop out?

The other piece has been an explosion in mid-levels; they suck, frankly. Guess who makes a better cardiologist?

Aren't mid-levels explicitly 'not cardiologists'? My understanding is that the majority of issues are obvious, and having a mediocre individual take care of them is a correct allocation of resources. I have a heart problem that I get looked at by a cardiologist every two years (back in India). The ECG is taken by a mid-level, and the cardiologist does the ultrasound himself only because I am a long-time customer; a mid-level could do that too. The highest-value thing he does is review the ultrasound and then tell me that my heart is still okay and I am good to go.

His resources are best used for the last part of my checkup (the review) and for spending the majority of his time on real emergencies. What's wrong with that?

Stealing doctors from other countries is a popular solution and it has some ethical and practical problems

Eh, I disagree on both points. The ethical problems have never been an issue in the US; brain gain is a fundamental national value. Practically, the USMLE plus residency matching is hellish for international candidates. I'll let @self_made_human chime in, but the idea of USMLE-qualified doctors being incompetent sets off a bullshit alarm for me.

If I had to speculate, the bottleneck for international candidates is the residency, and it is easiest to get residency slots in the least desirable towns and cities. It's possible that top international candidates would never agree to waste three extra years in the middle of bumfuck nowhere, and therefore only mediocre candidates apply. Top candidates are in competitive fields like cardiology, which would require them to waste about 6-7 extra years in bumfuck nowhere, making it even more unlikely that they'll apply.

AI will come eventually but it isn't ready yet.

You'd be surprised. The cutting edge of AI (complex agent swarms) is years ahead of what most people think the cutting edge is (a ChatGPT subscription).

For example, a ChatGPT subscription is $20/month, while I routinely burn hundreds of dollars per day in LLM costs. The strongest models are capable of insane things, but it feels like only people in some small circles have realized it so far.
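To make the gap concrete, here is a back-of-the-envelope sketch of how agent-swarm usage outruns subscription pricing. The per-million-token prices and workload numbers below are placeholder assumptions for illustration, not any provider's actual rates:

```python
# Rough sketch: why agent workloads cost far more than a $20/month
# subscription. Prices and workload figures are ASSUMED placeholders.
PRICE_PER_M_INPUT = 3.00    # assumed dollars per million input tokens
PRICE_PER_M_OUTPUT = 15.00  # assumed dollars per million output tokens

def daily_cost(agents, runs_per_day, input_toks, output_toks):
    """Dollar cost of running a swarm of agents for one day."""
    calls = agents * runs_per_day
    cost_in = calls * input_toks / 1e6 * PRICE_PER_M_INPUT
    cost_out = calls * output_toks / 1e6 * PRICE_PER_M_OUTPUT
    return cost_in + cost_out

# e.g. 20 agents, 50 runs each, 40k tokens of context in, 2k out per run:
print(daily_cost(20, 50, 40_000, 2_000))  # prints 150.0
```

Even with these modest assumed numbers, a small swarm burns about $150/day, in line with the "hundreds of dollars per day" figure above; context re-sent on every call is what dominates the bill.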

It's still not ready yet, but objects in the mirror are closer than they appear.

I apologize for sounding harsh, but that is a bad justification, even more so on a forum that prides itself on identifying collective incompetence and blind spots in elite circles. This is the common excuse of bureaucrats and careerists who love abstractions more than action.

The point is that most people who have an opinion on this wildly misunderstand the reality of the situation with respect to the role of the AMA, where the shortages are, how much physicians get paid (millions a year??? see down thread), how much of US healthcare spending is on doctors and so on.

Having an opinion does not mean one has a realistic understanding of what is happening and how to fix it.

To me this is a classic Gell-Mann amnesia issue. I see how firm and misinformed most people who post on this topic here are and it makes me trust the experts vs. posters here on topics I am not knowledgeable on.

This is the issue that gives me pause with medical education questions. What proportion of procedures are fairly trivial and easily enough handled by a mid-level, and furthermore, what proportion of high-complexity procedures are creating value equivalent to their cost?

It looks to me like getting population life expectancy up to the mid-70s is fairly trivial for modernized economies. I'd imagine a lot of that is picking the proverbial low-hanging fruit of basic sanitation, improving maternal health, et cetera. On top of that, a bunch of public health considerations like diet and general longevity seem to be worth half a decade or so. How much is the endless battling over medical funding actually doing?

Unrelated bit first - great user name.

So the issue is that triage is the work for a lot of medicine. That's basically the vast majority of what emergency medicine is for.

For a simple example (in the sense that triage nursing programs and AI can handle this one):

Someone comes in with chest pain. Do you give them some Tums, or do you give them a hundred-thousand-dollar-plus cardiac work-up? Do they get better in three seconds, or do they die?

Knowing to ask things like "hey, did you just have three pounds of spaghetti with red sauce?" lets you figure it out.

A more complicated question might be something like "this person fell, do you scan their head before sending them home?" which has a lot of research, debate, and need for asking patients (who may be unreliable) very specific questions.

Once you've triaged and diagnosed them things get simpler, but at that point to a large extent the work is already done.

Also - Americans need someone to sue.

Lawyers went through a rapid expansion in supply, and it did not reduce the quality of law.

Citation very much needed for that one.

If anything, allowing the competition to take place in the open has increased the bar for getting into elite law schools.

The bar is still quite low for those with the correct skin color and/or genitalia, and many schools are trying to find ways to lower it even more by not requiring applicants to take the LSAT (which has also been watered down).

But now, there is also a sufficient supply of mediocre lawyers who fulfill mundane legal duties.

Not the case for many non-metro areas in the U.S. Law schools are cranking out all kinds of graduates, many underqualified, yet lots of rural areas have fewer lawyers per capita than they did in the 80s and 90s. See the whole "legal deserts" discussion.

I have been intentionally sitting this whole conversation out. It hits too close to home. There is also a lot going on atm, but if you do want my full take I can come back to this, probably as another comment. This only serves to let you know I noticed.

Leaving me hanging bro!

(no its okay I know you disagree with me on some of this, especially AI)

Nah, I am fond of you, even though I agree we have our disagreements. I can't even be mad at the AMA; they're not responsible for my med school being subpar and lazy.

I was just in a very bad place yesterday, largely to do with the fact that I work in the UK, where I ended up precisely because I'm still not eligible for the USMLE. That makes the whole situation somewhere between academic and painful to engage with; what difference does it make what I think?

(This standard, applied fairly, would preclude most engagement on this platform. I will cop guilty to mild hypocrisy.)

For what it's worth, I think American doctors are world class, only closely rivaled by places like Singapore or other very rich First World countries. The UK is far more uneven, even if I regularly meet doctors who are both better than me and are at a level I would consider well past competent. India? All I can say is that there are plenty of doctors who are world-class, and I know many who would easily fit in in the US. And a lot of idiots who would find a way to kill a cadaver.

Where am I in the grand scheme of things? Idk. My confidence is shot in many ways, but I think the objective evidence, at least from exam results, is that I am above average with respect to my peers in India or the UK. I am not a senior physician, so I suppose that is good enough. I can handle most things in psychiatry, at least if I have time, Google and ChatGPT to help me. I absolutely can match my bosses if I have those tools at hand, which I do have in most scenarios that aren't academic assessments.

However, the medical training pipeline in the US is a bad joke. Mandatory pre-med? What the absolute flying fuck. I don't need my surgeon to have read Seneca in order to harvest my appendix. Even the harder stuff like chemistry or microbiology is a waste of time; the med school syllabus covers everything we need to know. There is a very good reason the rest of the world doesn't do this.

Then there's the fact that the sheer explosion in mid-levels is because of the doctor shortage. I think that, in objective terms, there is nothing wrong with making tradeoffs between quality and availability. Otherwise we could have only the single most talented med student per year become a doctor, and have them train for 50 years so they can handle their own geriatric care right before the next candidate takes them off for MAID.

If you can't legally do that, since the quality of medical training is sacrosanct, then the system will try very hard to route around doctors. I have no reason to think the US has found the global optimum, and I think you guys are too strict. Obviously, I wouldn't endorse any measures that dilute physician wages to the point nobody bothers, but that is very unlikely to happen anyway.

If you float a bill that says: all global physicians who come from a very specific med school in India, have practiced in the UK, are at least six feet tall and right-handed, are welcome to enter the US? It would have my vote. But I am obviously biased.

Anyway, you ended up coaxing me into a more substantial reply than intended, so I hope both you and @DirtyWaterHotDog are happy. Still not everything I have to say, or could say, but it's something.

I think it's important to keep in mind (and this applies to the AI side of things also) that the U.S. is a weird country and that weirdness has benefits and costs.

One of the things that pops up is that the demands on U.S. physicians extend beyond academic medical knowledge; this is one of the reasons that individuals with perfectly excellent medical knowledge sometimes don't survive the retraining process.

Doctors here have to survive the vagaries of our legal system, malpractice environment, U.S. patients (who are...different) and other factors. This requires both higher standards of behavior and different standards of behavior.

Much of this I would happily get rid of, but those aren't usually the targets complainers want to go for. Tort reform would solve a lot of problems.

With respect to the physician shortage, it mostly isn't real.* If you live in a reasonably sized area you can get a PCP. You may need to look around, and you may have to establish care before you get sick. But the problem is that U.S. patients are demanding, don't want to think ahead, and want instant gratification. If you live outside of a major area you may not be able to find a doctor, but that's because of the allocation problem. Midlevels don't fix the allocation problem because they also don't want to move to those areas, and so they don't.

For specialists the situation is more complicated. Some can't be replaced by midlevels and can't really have their training numbers increased; others have had shortages worsened by midlevels (like cardiologists getting over-consulted; same for neurology and some others). Others, like psychiatry, have a situation where people just stopped taking insurance and went cash-pay because it's more lucrative, leading to a phantom shortage.

*There's a strong disconnect between how people feel about this and the actual numbers.

I can't really argue against you, can I? It's been 25 years and change since I was in the States, and it would have been like six months if my visa hadn't bounced. Perhaps American patients genuinely are more demanding and entitled, but I can promise you I have seen more than my fair share of demanding and entitled patients in the UK (while the NHS is free, a doctor there is not considered that different from a bus driver). I have seen demographic disasters/marvels that have simultaneously made me lose hope in humanity and rekindled my optimism for medical progress. Like, seriously, how the fuck are some of these people still alive?

However, I am confident that there is a severe physician shortage when you leave urbanity. Several states, from memory including Texas and Mississippi, have quite recently massively relaxed the requirements for foreign doctors to come and practice as long as they go to underserved areas. They don't even need to have cleared the whole USMLE, let alone have completed a residency. You bet I follow that kind of stuff like my life depends on it; in a very meaningful sense, it does.

The fact that this happens at all, let alone in three or four states (could be more), is suggestive of something. Of course the typical doctor wants to be in the Big City and will fight to go there. In India or the UK, where doctors are more abundant, sheer competition will force some of them to take up less-than-ideal appointments, albeit not at a major loss to earning potential.

I've worked in for-profit systems, in not-for-profit setups, and plenty in between. I would be rather surprised if the US was qualitatively different and not merely quantitatively so. I would happily take the hassle of insurance and the extra medicolegal liability if I could double or triple my income. I'll take the risk of getting shot or knifed. You guys have it very good, by global terms, at least when done with residency.

This is not really an argument that we should let every dude with half a frontal lobe practice medicine, or that doctors should be paid worse. But I think you heavily discount the risk posed by AI, both present and future. I have had senior doctors, family and otherwise, try and flummox recent versions of ChatGPT. They can't pimp it. It will out-pimp them. It's not going to do surgery for you today, or even next year, but many branches of our profession do not rely on procedural skills to make a living (I say, while crying that I have to do cannulations and catheterizations or conduct ECGs because our nurses are useless).

I have had senior doctors, family and otherwise, try and flummox recent versions of ChatGPT. They can't pimp it. It will out-pimp them

Playing devil's advocate, have you seen a medium-IQ patient try and do so? I think this is what @Throwaway05 was alluding to - experts gather appropriate data and naturally pitch things a certain way. In a very meaningful sense, both the biological and silicon doctors have been trained on the same data and even when your family try to flummox GPT, they will unconsciously produce the kind of gotchas that broadly match the training set.

Now take your average patient. They may mis-state, misinterpret, or lie about their symptoms. They may be panicked and exaggerate, or be tough guys and undersell. Standard LLMs have a tendency to trust their input sources too much, although you can train them out of this. LLMs also have a massively reduced amount of input compared to the amount of info you would get in even five seconds of video, and they can panic - I have called an ambulance on the instructions of one, and although that decision was certainly defensible, it was objectively incorrect, and I don't think a real doctor would have done so.

I would not expect even a custom LLM to do nearly as well as a doctor-LLM centaur. I know that tests have found otherwise, but those tests did not, AFAIK, deal with real-world end-to-end consultations.

Fortunately for me, computing is similar. I have experimented with agents and the like, and they don't do good work left to themselves. I find AI massively helpful in my work, but I'm confident my CEO couldn't get even close to the same results using the same tools.

They may mis-state, misinterpret or lie about their symptoms.

Yes, this is going to be a hard problem, and self_made does give a counterargument, but you'll also see downstream problems that cause angst.

For instance, ADHD evaluations. Most people will say the right things while not really having adult-onset ADHD. A psychiatrist gets to do this weird dance and figure out how often to bend. The LLM is either going to be confused and offer everyone stimulants, or be rebelled against after it refuses everyone.

Anyone else with motivation issues will have the same problem - testosterone? Inappropriate antibiotics?

People who want conservative or aggressive management for something will rapidly figure out how to get it, and all kinds of other bullshit.

Human doctors have variety in their responses and can make a variety of judgments.

As could true AI, if we get that, but we don't have it.

I can't say that I've ever intentionally gotten someone to do that experiment in front of me, but I have, on request, reviewed the advice given by various LLMs and haven't really seen any egregious errors in a long time.

Poor information will degrade performance in anyone, including human physicians. Our usual approach is to assume honesty and then update towards the possibility that someone is lying or mistaken when further evidence comes in. Of course, it depends on what exactly someone says and how they present. If a clearly twitchy, disheveled guy shows up and claims to have never done alcohol or drugs, I would be rather suspicious. If you have herpetic lesions on your cock and claim you've never slept with anyone but your wife, I will not take your testimony entirely at face value.

I am obviously at an advantage asking ChatGPT questions, because I have at least a vague idea of the kind of information that is important to disclose even outside psychiatry, I know the kind of issues that would make a cardiologist or ophthalmologist sit up straight and lock in. Yet it will consistently ask me for clarification or for more information to narrow down the answer-space, and I have no reason to assume it wouldn't do this for a layperson. In fact, it might even be more thorough, because it might (correctly) assume that there's more room for error or misunderstanding in that scenario.

Right now, a Doctor-LLM cyborg is probably superior to either alone, at least in realistically messy scenarios. But even just the LLM will, at least with the exact same information and affordances, perform quite well. Most of the edge cases can be covered by the assistance of a mid-level or a junior doctor.

I can tell you that my bosses are too old to have learned to rely on AI as heavily as I do, so assuming they don't, I can match their performance while using it. Similarly, a smart med student might be able to match my performance if I was stripped of AI assistance while they got to use it. This gap is, as far as I can tell, becoming increasingly narrow with time.

With respect to AI, I'm sure it will get it done eventually, and I am sure there is tremendous pressure for it to do the job.

AI can do simple fact recall; it struggles much more with a patient who tells you he fell and can't move his leg at all, when really he won't move his leg due to pain. The AI will probably assume a neurologic deficit and trigger a work-up for that. A physician will poke the patient, see him move, and assume traumatic injury. Someone needs to get clean data to give to the AI for outsourcing to work (for now).

And yes, some states are doing alternative paths to practice; this is exploding in popularity, and it's a solution to the allocation problem. It may even be a good one, but it seems ineffective so far because people can't get malpractice coverage or privileges without actual training - the liability risk is too high.

I am sorry to say this, but you are woefully underestimating the ability of AI.

I threw your hypothetical into ChatGPT (the paid version), and even before it finished thinking (it's doing a lot of thinking, and it all seems relevant), it immediately noted:

The big early split is whether he “can’t move it” because of pain/mechanical disruption, because the limb has lost perfusion or nerve function, or because the problem is actually central, such as spine or stroke. That distinction changes the whole pathway immediately.

Emphasis added.

In the full workup, it mentions:

"Is it too painful to move, or does it feel weak/dead/numb?"

Seriously, try this for yourself. Get a paid subscription and try and find a clinical scenario where the evidence you have provided has a definitive answer, which the AI is unable to diagnose even after the same amount of effort a human clinician would devote.

Here's the full conversation, if you want to take a look:

https://chatgpt.com/share/69be7d62-ad60-800b-a335-bf527ee5168e

Someone needs to get clean data to give to the AI for outsourcing to work (for now).

AI can't do a lot of clinical work, because it lacks hands. But it can borrow someone else's, say an NP's or PA's, or just a nurse's. And then it can do things that would otherwise take a human doctor.

I don't want to undermine our profession, but you have to understand that I value probity over professional solidarity, at least if challenged. When I do disclose how close we are to replacement, it's where it doesn't matter; I don't want to lose my job either, but I can rarely bring myself to mislead when I genuinely believe otherwise. I'm not accusing you of being misleading, by any means, but consider this example a data point suggesting you're not considering how scarily good LLMs can be. We can take this to DMs if you prefer.

No, I get that they can do fancy things, but translating those into a real environment is HARD. We will get there eventually, but we can't even get self-driving cars approved because of the liability and failures. Medicine has just as much cost pressure to do so, but just as many contrary pressures (if not more, because of people's demand for a person).

We also have a side game where LLMs are killing skills and knowledge development by outsourcing thinking (e.g., in note writing). That will be a separate problem.

Interesting result. Out of curiosity, do you have a user-set preamble that you use as well as the base question? Generally when GPT starts using words like 'brutally practical' it's because the user has requested that.

we could have only the single most talented med student per year become a doctor, and have them train for 50 years so they can handle their own geriatric care right before the next candidate takes them off for MAID

I would love somewhere like North Korea to do this and proclaim with unabashed truth and sincerity that they have the best-trained doctor in the world. People from all over could come to gaze in awe at "The Doctor".

I have a pedigree of medical practice that stretches back 4 generations, probably longer if we're willing to accept qualifications that aren't formal. I would volunteer, as long as they give me a VPN and part of Kim's food budget. That is probably the best job security I can aspire to with AI breathing down my neck.