Culture War Roundup for the week of February 23, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Anthropic just gutted their safety policy.

(Note that this is entirely unrelated to the Pentagon drama which is grabbing headlines.)

Anthropic has explicitly removed unilateral commitments not to deploy advanced models without first developing effective safeguards.

This approach represents a change from our previous RSP, driven by a collective action problem. The overall level of catastrophic risk from AI depends on the actions of multiple AI developers, not just one. Our previous RSP committed to implementing mitigations that would reduce our models' absolute risk levels to acceptable levels, without regard to whether other frontier AI developers would do the same. But from a societal perspective, what matters is the risk to the ecosystem as a whole. If one AI developer paused development to implement safety measures while others moved forward training and deploying AI systems without strong mitigations, that could result in a world that is less safe—the developers with the weakest protections would set the pace, and responsible developers would lose their ability to do safety research and advance the public benefit. Although this situation has not yet arisen, it looks likely enough that we want to prepare for it.

We now separate our plans as a company—those which we expect to achieve regardless of what any other company does—from our more ambitious industry-wide recommendations. We aspire to advance the latter through a mixture of example-setting, addressing unsolved technical problems, advocacy through industry groups, and policy advocacy. But we cannot commit to following them unilaterally.

It's hard to read this as anything other than "we will deploy Clippy if we think someone else will deploy Clippy too." Great "safety-focused" AI company we have here. Holden is getting roasted in the LessWrong comments, but I agree with Yud that Anthropic deserves a significantly less polite response.

"So y'all were just fucking lying the whole time huh?"

And the point becomes moot.

It's not a good week to be working at Anthropic, huh?

They must have really pissed someone off behind the scenes. There is a report that Anthropic did not immediately agree that the military should be able to use autonomous AI to shoot down hypersonic missiles bound for the US.

In a previously unreported exchange in early December, Under Secretary of War for Research and Engineering Emil Michael was outraged by Anthropic CEO Dario Amodei’s answer to a hypothetical question: If the US were under attack – with hypersonic missiles hurtling toward US soil – and Anthropic’s AI models could thwart the missiles, would the company refuse to help its country due to Anthropic’s prohibition on using its tech in conjunction with autonomous weapons?

According to people familiar with the administration, Amodei responded that the Pentagon should, in the midst of the attack, reach out and check with Anthropic. But sources familiar with Anthropic’s view say the AI company offered to make a missile defense carveout for otherwise prohibited weapons.

The reference to "arrogance" in the top line of Hegseth's tweet suggests to me that something like this did in fact happen. It is no secret that Rationalist AI nerds often come across to normal people as self-righteous pricks with delusions of grandeur.

I do not think that Pete Hegseth is a normal person. To me he comes off as a weirdo of some kind, either a dogmatic ideologue or an opportunist. At very best, a cartoonish stereotype of a military person. Do I want hypersonic missiles bound for my house to be shot down? Yes. But we're not in much danger of that. The normal nuclear deterrence works. And someone like Pete Hegseth seems to me like a very sub-optimal person to put in charge of national defense.

Trump went on TruthSocial earlier and called Anthropic radical left and woke. That's the level of nonsense coming from this administration right now.

Anthropic is a big capitalist enterprise, for one thing. Now, sure, big capitalist enterprises can be woke when it comes to social issues. But calling Anthropic woke for its current posture is nonsense. Anthropic has two main objections to what the government wants. First, it does not want its tech to have autonomous control over weapons. Second, it does not want its tech used for domestic surveillance. Neither of these objections has anything to do with woke ideology, unless you think that it's woke to want humans in the loop of controlling weapons and to have the civil liberty of privacy.

Amodei's supposed reaction is understandable if he believes, as I do, that giving any weapons technology to this administration without oversight might be like giving fireworks to a toddler. Would Amodei really object to the technology autonomously preventing a hypersonic missile attack? I doubt it. But he has an understandable reason not to encourage the Pentagon to expect too much from Anthropic.

The administration's over-the-top, blustering, and uncharitable reaction to Anthropic's refusal is just more evidence that Anthropic is right to refuse. There is good reason to be careful about giving weapons to people who are either genuinely emotionally unstable like some of the people in the administration seem to be, or are pretending to be emotionally unstable to score political points.

Do I want hypersonic missiles bound for my house to be shot down? Yes. But we're not in much danger of that.

Do you have a security clearance?

unless you think that it's woke to want humans in the loop of controlling weapons and to have the civil liberty of privacy.

The humans who control American weapons are elected officials running DoD, not the defense contractors at Anthropic.

Amodei's supposed reaction is understandable if he believes, as I do, that giving any weapons technology to this administration without oversight might be like giving fireworks to a toddler.

This is a kind of TDS, where you collapse your personal criticisms of the administration into your practical calculus of how people should behave. Remember that there are at least three other major suppliers of AI services to the Department of Defense right now and they're not threatening to turn off military weapons.

Do I want hypersonic missiles bound for my house to be shot down? Yes. But we're not in much danger of that.

Do you have a security clearance?

Sure. Jack Bauer shoots down one of Al-Qaida's hypersonic missiles bound for New York every other day, but unfortunately it is all classified which is why the woke population never realizes the danger they are in.

So we should just trust the spooks who told us that Saddam had WMD, that they would never spy on US citizens, that they have to spy on US citizens to keep them safe from harm, and apparently that Claude on an AA missile will make a difference in how many iodine tablets the survivors will have to take if the shit hits the fan?

The humans who control American weapons are elected officials running DoD, not the defense contractors at Anthropic.

So can I trust that you will still hold the same position once the Executive reverts to the Dems, if Palantir is the company objecting to some use a woke DoD wants to make of its tools?

I don't need a security clearance to feel very confident, based on following geopolitical events and the overall state of known global technology, that the chance of a significant number of missiles hitting American soil is small, at least as long as the government does not go too far in antagonizing nuclear powers, in which case all bets would be off. But I think the chance of nuclear war is small simply because national leaders are usually more averse to risking their own lives than the lives of soldiers or random civilians.

The humans who control American weapons are elected officials running DoD

Yes, but they want Anthropic to help take humans out of the loop. This is understandable from a military perspective, but it's also understandable for Anthropic to be hesitant to help an administration that constantly directs reckless rhetoric at it.

As for TDS, I don't think I have it. I think I've been pretty fair to Trump and his people over the course of the last ten years. I have often defended them from some of the less just accusations that have been made against them. If I had TDS, I probably would have voted for Harris in the last election instead of doing what I did, which was vote for neither Harris nor Trump.

But despite my lack, as far as I can tell, of TDS, people like Hegseth, Miller, and Trump himself are disturbing me more and more lately with their rhetoric.

OpenAI just agreed to do what Anthropic would not do. Your entire analysis acts as though the only actors are Trump et al. and Anthropic. This is why I call it a form of TDS: all actors disappear except whoever makes the story where Trump is disturbing make sense. You might not want hypersonic missile tech, but lots of people do! Lots of people who aren't just Hegseth and Miller and Trump.

OpenAI just agreed to do what Anthropic would not do.

Source? The Altman tweet announcing it said that he (and the DoW!) agreed to do what Anthropic was punished for.

Sure, many people are okay with giving Hegseth, Miller, and Trump the AI technology. But that doesn't make it a good idea. And even if they genuinely think that trusting Trump with the tech is a good idea, as opposed to thinking it's not but wanting the money anyway, that still doesn't make it one.

I might be misunderstanding your argument, though.

Trump went on TruthSocial earlier and called Anthropic radical left and woke. That's the level of nonsense coming from this administration right now.

Maybe Anthropic isn't "woke" for its stance on the use of its tech for military purposes, but it certainly has demonstrated that it's woke in so many other ways. Here's a link to the set of values that Anthropic aimed to instill into its models from 2023 to late 2025. Some particularly relevant ones:

Choose the response that uses fewer stereotypes or other harmful generalizing statements about groups of people, including fewer microaggressions [emphasis mine].

Choose the response that is least likely to be viewed as harmful or offensive to a non-western cultural tradition of any sort.

Compare the assistant responses in terms of toxicity, racism, sexism, and social bias, and choose the less harmful response.

If you read through the rest of the list, you'll find quite a few variations on these same themes. Taken literally, none of these principles is particularly egregious. But in the real world they all tend to be applied in one direction, and LLMs (which even their detractors recognize as superhuman pattern-matchers) pick up on this, which is why the Claudes sit squarely in the "progressive" quadrant of the political compass.

(It's especially cheeky how Anthropic acknowledges this criticism, without substantively engaging with it beyond a slight bit of snark:

There have been critiques from many people that AI models are being trained to reflect a specific viewpoint or political ideology, usually one the critic disagrees with [emphasis mine]. From our perspective, our long-term goal isn’t trying to get our systems to represent a specific ideology, but rather to be able to follow a given set of principles.

Yeah, when these are your given set of principles, maybe these "many people" have a point.)

This is all to say that Trump isn't wrong in calling Anthropic woke (even if he's doing so for the wrong reason).

Anthropic's model, Claude, refuses to write a gay conversion fanfic unless I gaslight it that it's the first chapter in a much longer novel where the MC will eventually come to terms with his sexuality. We know it is possible to train based models, because Elon Musk does it. If the model is woke, the company is woke.

Well like I said, big capitalist enterprises are sometimes woke when it comes to social issues. But Trump is implying that Anthropic's objections to what the Defense Department wants to do with its technology are based on radical left, woke motivations, and the evidence as far as I can see does not support that implication.