This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

With respect to AI, I'm sure it will get the job done eventually, and I am sure there is tremendous pressure for it to do so.
AI can handle simple fact recall, but it struggles far more with a patient who says he fell and can't move his leg at all, when in reality he won't move his leg because of pain. The AI will probably assume a neurologic deficit and trigger a work-up for that. A physician will poke the patient, see him move, and assume a traumatic injury. Someone needs to collect clean data to feed the AI before outsourcing can work (for now).
And yes, some states are creating alternative paths to practice; this is exploding in popularity and is one solution to the allocation problem. It may even be a good one, but so far it seems ineffective, because people can't get malpractice coverage or hospital privileges without actual training - the liability risk is too high.
I am sorry to say this, but you are woefully underestimating the ability of AI.
I threw your hypothetical into ChatGPT, the paid version, and even before it finished thinking (it's doing a lot of thinking, and it all seems relevant) it immediately noted:
Emphasis added.
In the full workup, it mentions:
Seriously, try this for yourself. Get a paid subscription and try to find a clinical scenario where the evidence provided supports a definitive answer that the AI is unable to reach, even given the same amount of effort a human clinician would devote.
Here's the full conversation, if you want to take a look:
https://chatgpt.com/share/69be7d62-ad60-800b-a335-bf527ee5168e
AI can't do a lot of clinical work, because it lacks hands. But it can borrow someone else's - say an NP's, a PA's, or just a nurse's. And then it can do things that would otherwise take a human doctor.
I don't want to undermine our profession, but you have to understand that I value probity over professional solidarity, at least when challenged. When I do disclose how close we are to replacement, it's where it doesn't matter; I don't want to lose my job either, but I can rarely bring myself to mislead when I genuinely believe otherwise. I'm not accusing you of being misleading, by any means, but consider this example a data point suggesting you're not accounting for how scarily good LLMs can be. We can take this to DMs if you prefer.
No, I get that they can do fancy things, but translating those into a real environment is HARD. We will get there eventually, but we still can't get self-driving cars approved because of the liability and failures - medicine has just as much cost pressure to automate, and just as many contrary pressures (if not more, because of people's demand for a person).
We also have a side game where LLMs are killing skills and knowledge development by outsourcing thinking (ex: in note writing). That will be a separate problem.
It's 2026; the US has Waymos in Phoenix, SF, and LA, with slightly more limited availability in Austin, Atlanta, Miami, Dallas, Houston...
That is a lot of people (50 to 60 million nominally served). And that is today.
It's one thing to say self-driving isn't going to happen when they're puttering around in a DARPA course and knocking over half the cones. A whole different kettle of fish today, when a good fraction of your country can step out and order one through Uber.
It's like claiming we can't land humans on the moon after we already have manned spaceflight and rovers on the surface. Meanwhile, hundreds of billions of dollars (pretty much all the positive growth in the US stock market) and thousands of the world's smartest people, some of them drawing literal billion-dollar salaries, are working diligently to make it happen, as is the case with AI.
I would not bet against it. Maybe it's more academic for you; you've probably had a decently long and well-remunerated career. I, on the other hand, face a very real threat of being deported after outliving my welcome, then coming home to a ship that is simultaneously on fire and sinking. Oh well, at least I can probably make it through my psych residency before things get that bad - the ladder is rapidly being pulled up beneath me every time I look down. We can argue about timelines: it might take 5 years, it might take 10. It won't take 20, I'm confident of that much, and I've put a lot of thought into this.
I wish I were wrong and you were right. But if wishes were horses, I'd have made PETA very mad by building a living life raft to the States already.
Sure, self-driving cars are getting there, but they aren't in full use yet, and the legal tests aren't all the way there either.
AI may in fact replace everyone at some point, but doctors have more physical work, more patient-interaction work, more need to serve as a liability sponge, and so on, than most other white-collar workers.
Yes, the financial pressure to replace us is higher, but by the time they come for us in a serious way, everyone else will be gone too. Especially in psychiatry - you should have some safety there. More likely is an intractable increase in volume due to AI assistance.
In any case, even the finance people who love this shit are starting to push back against the way our economy is overweighted.
LLMs might just end up getting dropped as a boondoggle before they apply to too many use cases.
All fair points, but you have to consider that I'm pricing them in. I think that the regulatory and legal hurdles associated with replacing human doctors will buy us anywhere from 1 to 5 years once automation-induced unemployment has really taken off. It's already started, but I'm talking about >25% of the population being laid off and unable to find a job that pays nearly as well.
Even if it "merely" augments humans, the elasticity of demand in medicine is not literally infinite. A world where 90% of doctors are laid off while 10% supervise mid-levels and oversee LLMs (as both troubleshooters and liability sponges) is almost as bad as 100% of us being laid off.
Once a large fraction of the population is unemployed and baying for relief, how long do you think governments can hold out and keep doctors on a pedestal? The AMA is not all-powerful either. It's even worse in the UK, the NHS is floundering, and Rishi Sunak wanted to keep us uppity doctors in check by replacing us with AI and midlevels years ago. They couldn't pull it off then, but it is an increasingly real possibility now. The very fact that doctors are rare and expensive in the US makes you excellent targets. You can't hold out forever. Once a single country or even a state bites the bullet, and succeeds (or doesn't crash and burn), there's going to be a domino effect.
Hell, even places where doctors are more abundant and cheaper aren't safe, mostly because AI is even cheaper and because they're poor countries without the luxury of swallowing as many systemic inefficiencies as the US system can tolerate. The relevant comparison is the delta in pay between the next lower rung of the ladder + a ChatGPT subscription vs a human doctor (or drastically fewer human doctors). That is a very large financial attractor, and barriers are not airtight or as robust as either of us would like.
Anyway, give it 5 more years and we'll see who's right. It's not like I think that being correct about this will change the trajectory of my life; I can only try to lessen the blow.
I mean if everybody's fucked everybody is fucked.
I'm not saying we are immune to getting fucked, we'll just be along with everyone else.
That should be a comfort!
"We're all dying, just some of us faster than others" is technically true, but I hope I die after a 80 year old woman with metastatic pancreatic cancer. Like, I take very little comfort in knowing almost everyone else is possibly screwed when that everyone else also includes me! We'll see how it all pans out, I am mostly functional despite my serious concern about what the future holds. I'm depressed because of far more prosaic and pragmatic reasons than AI. Good luck to the both of us, though one of us probably needs it a tad bit less.
Interesting result. Out of curiosity, do you have a user-set preamble that you use as well as the base question? Generally when GPT starts using words like 'brutally practical' it's because the user has requested that.
Not really. My personalization settings tell it I'm a psychiatry resident and a transhumanist. I have some stylistic instructions, such as to never do calculations without tool-calls, but nothing that should make it act in an unusual manner. There is nothing in recent chat history that should change this, I do have memory enabled. It's on the default tone and personality settings too.
Cool, just interested in how you use it.
My main thing is ‘don’t use search unless explicitly instructed’. Base knowledge almost always seems to be better than whatever slop it finds on the first page of Google.
Huh. I would expect the exact opposite, but I've left search on pretty much since it became an option and haven't checked. Can't say it's been an issue in practice.