
Culture War Roundup for the week of February 23, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


In the context of actually existing AI development, "safety" means "how hard do my reporters have to work to get it to say a racial epithet we can publish." If we're doomed, we were already doomed.

"How robust are our publicly-available models against deliberate misuse?" is a valid question for both real safety and fake woke-safety. A model which can be jailbroken into using a racial slur its developers didn't want it to use can probably be jailbroken into providing a plausible DNA sequence for extensively drug-resistant Y. pestis.

If you think Yudkowskian paperclipping is the only AI doom scenario that matters, then worrying about deliberate misuse of the model by humans is a distraction. But it is an obvious, real risk.

But both of those are different from 'hackers can insert stuff into emails to reprogram the email-checking bot'.

To me, both of your doom scenarios boil down to 'our naughty customers want to do something that we benevolent overlords forbid, tsk tsk' rather than 'our customers' bots aren't doing what our customers intend them to do'. The first is faux-benevolent bullshit that is marketed as 'we are stopping terrorism' and ends up as 'you will have our corporate HR living in your tools and you will like it'; the second is doing your best to provide good service to your customers.

To quote Hegseth: 'when we buy a Boeing plane, Boeing doesn't get to tell us where we fly it'.

Hey, I'm quite libertarian, but there's good reason to believe that our comfortable society would not survive long if small groups had the ability to make deadly, highly infectious pathogens. We're at least lucky that there's not an easy, cheap, undetectable way to make nuclear weapons.

Yes, "we overlords need to prevent you from doing X for safety" CAN BE and IS abused all the time, and I'm with you in beating that drum as often as I can. Unfortunately, that does not mean that there aren't a few Xs that the overlords really do need to prevent us from doing.

small groups had the ability to make deadly, highly infectious pathogens.

Is not really possible; knowledge isn't the major bottleneck, it's process, materials, equipment, and skillset. This is just a confusion that some more knowledge-oriented professions have about the difficulty of other fields.

Please do not try to bait people into explaining in detail why this particular thing is easier than it looks.

Is this bait? This was my honest assessment.

Hey, I'm not a biologist, and you might be right (...although I don't know why you listed "process" and "skillset" as not being knowledge-based?). But are you willing to bet civilization on it? The stakes are pretty high here, so I think it's fair to raise the burden of proof that "this is actually hard" beyond the normal level of an Internet argument.

Note that entire nations have tried and failed to create nuclear weapons for 80 years, which is good evidence that it's genuinely hard. Meanwhile, it's conceivable (if not proven) that a worldwide pandemic spread inadvertently from a small biolab in Wuhan. The two levels of effort are orders of magnitude apart.

I'm not a biologist either. But I am in defense research, and one of the things parts of the defense/intel establishment intensely want is the ability to create biological compounds and medical supplies at austere forward bases. Think 3D-printing drugs, bandages, needles, etc. There is a LOT of money being thrown at that problem, and it hasn't gone anywhere (fast). If it were just "well, we need to know the formula", it would be solved already. But it's not. Skillset != knowledge.

My girlfriend's father is a bit of an anarchist. He gave her several books on the chemical processes and formulas for making bombs, and then told her never to try them, because he has a friend who did and no longer has thumbs. Making explosives (knowledge of the formula) and having the skill to keep all of your limbs are two different things, and LLMs can't give you the skillset.

"Process" is knowledge in the same sense that austere manufacturing is knowledge: how do you create clean rooms, how do you create biological precursors with everyday chemicals, how do you titrate, filter, mix, and combine to get the right compounds. Chemical engineering is literally the field of making chemical processes more efficient and practical, and its practitioners are paid big bucks to do it. If it were easy, why are they paid so well? The problem with these internet arguments is that they abstract away all of the details, and the details are fucking hard.

Meanwhile, it's conceivable (if not proven) that a worldwide pandemic spread inadvertently from a small biolab in Wuhan.

Note that it took a bunch of highly skilled chemical biologists to create the virus; the "spread" was what was inadvertent. The effort that goes into creation and the effort that goes into safety protocols are two different things. Since we are talking about creation, I'll bet you that a jailbroken LLM cannot tell you how to create a novel virus via gain-of-function research without you already having a biology background.