
Culture War Roundup for the week of February 23, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Anthropic declared a "Supply-Chain Risk to National Security" by SecWar Hegseth via tweet, because that's the universe we live in.

For those not following along:

Anthropic has had a contract with the Pentagon - valued at up to $200 million - since July 2024, making it the only AI company with models deployed on the USG's classified networks. Over several months, negotiations broke down over two specific safeguards Anthropic wanted built into any agreement: a prohibition on using Claude for mass domestic surveillance of Americans, and a prohibition on using it to power fully autonomous weapons systems. I stress fully autonomous, and the only reason Yudkowsky isn't spinning in his grave is that he's still alive. I'm not sure he enjoys it.

The Pentagon's position was that it has its own internal policies and legal standards, that mass surveillance and autonomous weapons are already regulated by law, and that it shouldn't have to negotiate individual use cases with a private company. It demanded that all AI firms make their models available for "all lawful purposes," full stop.

The Pentagon set a hard deadline of 5:01 PM Friday for Anthropic to drop its two exceptions. Amodei publicly refused to budge on either point. The deadline passed without agreement.

Shortly after, Hegseth declared Anthropic a "supply chain risk to national security," announcing that effective immediately, no contractor, supplier, or partner doing business with the U.S. military may conduct any commercial activity with Anthropic. CBS News article for those not fond of Twitter

Around the same time, Trump ordered every federal agency to immediately cease using Anthropic's technology, while allowing a six-month phase-out period for agencies like the DOW already using it.

Declaring a company a supply chain risk is typically reserved for businesses operating out of adversarial countries - Huawei, for example. As far as I can tell, Anthropic is correct in describing this as an unprecedented action when applied to an American company. Especially one that, as far as I can see, hasn't done anything wrong except refuse to jump when asked.

Anthropic says it will challenge any supply chain risk designation in court, calling the move "legally unsound" and warning it would set a "dangerous precedent for any American company that negotiates with the government." Anthropic's press statement.

They also argue that under federal law, the designation can only apply to the use of Claude as part of Pentagon contracts, and cannot affect how contractors use Claude to serve other customers.

Not one to let an opportunity or a still-warm corpse go to waste, Altman announced that OAI had struck a deal with the Pentagon. Using speech so smarmy that I'm not sure if there's anything there at all, Altman claims the deal preserved the same core principles Anthropic had fought for: prohibitions on domestic surveillance and autonomous weapons. I am unsure why the USG would find this any more acceptable than when Anthropic demanded it, except that they (quite reasonably) expect Altman to be more "morally flexible".

There's a petition circulating where hundreds of Google and OAI employees publicly ask their respective corporate overlords to stand with Anthropic. Apparently all signatures are validated.

Meanwhile, Scott, mild-mannered to a fault and very loath to dip his toes into political waters, is losing it on Twitter. And I agree with him. If the DOW finds Anthropic's terms so unbearable, that should have been considered before signing the contract. If they changed their mind, they ought to have canceled and accepted whatever penalties that involved, instead of using the full weight of the state for what can only be described as bullying. If domestic mass surveillance and fully automated weaponry are already legally off the table, then why all the fuss over putting that in a legal document?

Goddammit. It's only February. I'm tired, boss. I just find it very funny that:

WSJ Exclusive: Federal officials have raised alarm about the safety and reliability of xAI’s Grok chat bot

Really funny how Elon immediately offered up Grok for autonomous kill bots and the Pentagon was like "hahahaha, are you insane?"

It's kind of a mess. On one hand, the US military, as a matter of policy, doesn't like contractors putting conditions on the use of materiel. That's not the hard rule they want to pretend it is, as anyone remotely familiar with a leased military base can tell you, but it's also not something made up for this one exercise.

On the other hand, this is one of those technologies that's unusually dangerous in unobvious ways. A guy who makes missiles doesn't have to get contractual assurances that Schmuck A won't shoot them into a busload of American orphans, because if Schmuck A were going to do that, no contract would stop him. Trying to use an LLM for hypersonic missile defense is, presumably, not obviously batshit insane, yet it could easily plumb new depths of stupid ways to start WWIII just because someone thought the temperature value needed to go up a bit higher.

On the gripping hand, there are particular reasons to be skeptical of Anthropic here. Its position and the nature of the technology give it a unique capability to check for compliance, and while I don't think the company would blow up a massive contract just to get a short-lived news cycle falsely claiming Republicans were doing something awful, I absolutely think individual employees would. Even outside of the politics, leaving the interpretation of where an 'autonomous lethal system' begins and human-assist ends, or where 'mass domestic surveillance' begins and 'a test of any sensor system ever' ends, to whatever favorable Californian court hearing Anthropic could bring is... not a pleasant consideration. There's a more cynical take where laws prohibiting a behavior don't real where governments don't want them to, while contract requirements could, but it runs face-first into Anthropic not being particularly focused on the money, and that's about all you could recover.

On the other gripping hand, there are a lot of reasons for Anthropic to be skeptical of the military (and intelligence) sectors here. Those legal constraints have already turned to anarchotyranny, where they require thirty levels of approval for a data collection that's never going to be read and will be deleted, but the NSA has its warehouse and a lot of very long gloves.

On yet another side, there's a problem where supply chain issues are Big Problems when they involve anyone this distributed. I'm not even in the military, and I've been pretty badly screwed over by a fuel vendor deciding that they just Weren't Really Feeling It before. The possibility that someone might cut off translation and transcription services can get people killed if they're in the air and dependent on them. Even if this disagreement were focused on something I might sympathize with Anthropic on, it's a major warning shot to a government organization built around not receiving warning shots.

But it's also both unprecedented and very rapid escalation.

Indeed, and I think there's a point you've touched upon that merits more depth: how one operationalizes compliance with contract restrictions.

Certainly I don't think the DOW can abide a contractor not just having conditions (which may or may not be objectionable depending on their substance), but asserting that the contractor itself gets to decide the matter and cut off support on the fly. That seems like a bridge too far.

Searching in vain for de-escalation here, one hopes the parties could come to an understanding where the substantive restrictions are acknowledged without creating a procedural veto for the contractor.