Culture War Roundup for the week of February 23, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Anthropic declared a "Supply-Chain Risk to National Security" by SecWar Hegseth via tweet, because that's the universe we live in.

For those not following along:

Anthropic has had a contract with the Pentagon - valued at up to $200 million - since July 2024, making it the only AI company with models deployed on the USG's classified networks. Over several months, negotiations broke down over two specific safeguards Anthropic wanted built into any agreement: a prohibition on using Claude for mass domestic surveillance of Americans, and a prohibition on using it to power fully autonomous weapons systems. I stress fully autonomous, and the only reason Yudkowsky isn't spinning in his grave is that he's still alive. I'm not sure he enjoys it.

The Pentagon's position was that it has its own internal policies and legal standards, that mass surveillance and autonomous weapons are already regulated by law, and that it shouldn't have to negotiate individual use cases with a private company. It demanded that all AI firms make their models available for "all lawful purposes," full stop.

The Pentagon set a hard deadline of 5:01 PM Friday for Anthropic to drop its two exceptions. Amodei publicly refused to budge on either point. The deadline passed without agreement.

Shortly after, Hegseth declared Anthropic a "supply chain risk to national security," announcing that, effective immediately, no contractor, supplier, or partner doing business with the U.S. military may conduct any commercial activity with Anthropic. (CBS News article, for those not fond of Twitter.)

Around the same time, Trump ordered every federal agency to immediately cease using Anthropic's technology, while allowing a six-month phase-out period for agencies, like the DoW, already using it.

Declaring a company a supply chain risk is typically reserved for businesses operating out of adversarial countries, Huawei for example. As far as I can tell, Anthropic is correct in describing this as an unprecedented action when applied to an American company. Especially one that, as far as I can see, hasn't done anything wrong except refuse to jump when asked.

Anthropic says it will challenge any supply chain risk designation in court, calling the move "legally unsound" and warning it would set a "dangerous precedent for any American company that negotiates with the government." Anthropic's press statement.

They also argue that under federal law, the designation can only apply to the use of Claude as part of Pentagon contracts, and cannot affect how contractors use Claude to serve other customers.

Not one to let an opportunity or a still-warm corpse go, Altman announced that OAI had struck a deal with the Pentagon. In speech so smarmy that I'm not sure there's anything there at all, Altman claims the deal preserves the same core principles Anthropic had fought for: prohibitions on domestic surveillance and autonomous weapons. I am unsure why the USG would find this any more acceptable coming from OAI than it did coming from Anthropic, except that they (quite reasonably) expect Altman to be more "morally flexible".

There's a petition circulating where hundreds of Google and OAI employees publicly ask their respective corporate overlords to stand with Anthropic. Apparently all signatures are validated.

Meanwhile, Scott, mild-mannered to a fault and very loath to dip his toes into political waters, is losing it on Twitter. And I agree with him. If the DoW finds Anthropic's terms so unbearable, that should have been considered before signing the contract. If they changed their mind, they ought to have canceled and accepted whatever penalties that involved, instead of using the full weight of the state for what can only be described as bullying. And if domestic mass surveillance and fully autonomous weaponry are legally off the table anyway, then why all the fuss over putting that in a legal document?

Goddammit. It's only February. I'm tired, boss.

I just find it very funny that:

WSJ Exclusive: Federal officials have raised alarm about the safety and reliability of xAI’s Grok chat bot

Really funny how Elon immediately offered up Grok for autonomous kill bots and the Pentagon was like "hahahaha, are you insane?"

Ideally, governments should not have companies they like or dislike. (They can still have an independent antitrust commission which splits up monopolies, though.)

In the US, the relationship between big corporations and the government envisioned by both sides of the aisle is the same as in fascism -- companies enjoy some autonomy and can make money for their shareholders, but if the Fuehrer tells them to build tanks, they know that they are not at liberty to respectfully decline and build cars instead. We've seen this with the Democrats leaning on the social media companies to suppress COVID misinformation (later extended to general 'misinformation'), with the TikTok law, and with the pathetic display of the heads of SV kissing the ring of the Don when he took office last year, followed by his blatant favoritism.

So Hegseth retaliating against a company that dares to have (quite modest, to be honest) ethical red lines is in a long tradition of corporations being told what to do lest they receive a broadside from regulatory authorities.


For Anthropic, this is a costly signal. While I am reasonably confident that the courts will stop this government overreach eventually, the court system recently had this thing where it would let government decisions play out for a year before saying "haha, obviously not".

It also makes me slightly more confident in Anthropic doing the right thing in general. Obviously they took some hits over revising their Responsible Scaling Policy earlier that week. My personal take is that at least Anthropic cares somewhat about alignment. Contrast with OpenAI after Altman's coup, or Meta (whose director of alignment only makes the news when she gets OpenClaw to delete her inbox), or xAI (whose goal seems to be to build the AI which undresses the most minors before becoming a paperclip maximizer).

Of course, Anthropic is also signaling that they are not Trump-aligned, which may be helpful in three years. OTOH, Democrats also want a military contractor to jump when told to jump, and their red lines did not even mention vulnerable minorities, so I am unsure how much goodwill this will buy them.

I am also unsure how much this will matter for their day-to-day operations. My understanding is that AI companies are burning through vast amounts of investor cash to train the next model, which will win the AI race and pay for itself a thousandfold -- a plan that seems almost as viable if you do not have government contractors as customers.


For US contractors, I am not yet clear what the supply chain risk designation entails. Is it just "you may not use Claude Code while working on Pentagon software", or "your whole company may not both work on defense contracts and use Claude", or "Anthropic is radioactive, any company working with a radioactive company is radioactive itself, and a defense contractor must be non-radioactive"? The last one seems practically unenforceable in a global economy; "the Malaysian shipping company we use has their offices cleaned by a company which uses a Huawei router" would qualify, after all. The middle one hinges on what counts as a whole company, which is typically very flexible -- you could have Oracle Defense as a separate entity from Oracle, or whatever.

Of course, in the hole I am living in, the latest hearsay news is that Claude is the best LLM for writing code. Not sure how the gap to their competitors compares to the juicy gravy train of fat DoD contracts, though.

So one way to spin this (depending on how you lean wrt AI coding) would be "Hegseth weakens US military by denying them the best tool for the job", which from a European perspective does not really sound like a bad thing.

If I had to name the company I'd like to see pull off ASI, I'd absolutely go for Anthropic. I agree that they take alignment very seriously, and while I do not agree with all the moral takes they've tried to instill into Claude via its Constitution, it's remarkably sane nonetheless. I'm not an EA, I don't give a hoot about shrimp welfare, I'm ambivalent about model welfare, but I'll be damned if I see a better alternative. I mirror your take on OAI, xAI, and Meta. Google? I'm unsure. Perhaps better than those three.

Amanda Askell strikes me as one of the few philosophers who genuinely deserves to be the godmother of an AGI. Maybe Scott could do better, if I absolutely had to name alternatives.

For US contractors, I am not yet clear what the supply chain risk designation entails. Is it just "you may not use Claude Code while working on Pentagon software", or "your whole company may not both work on defense contracts and use Claude", or "Anthropic is radioactive, any company working with a radioactive company is radioactive itself, and a defense contractor must be non-radioactive"? The last one seems practically unenforceable in a global economy; "the Malaysian shipping company we use has their offices cleaned by a company which uses a Huawei router" would qualify, after all. The middle one hinges on what counts as a whole company, which is typically very flexible -- you could have Oracle Defense as a separate entity from Oracle, or whatever.

I'm no expert, but my impression is that the DoD wants to go with the maximalist interpretation, while Anthropic wants to get the designation dismissed entirely, or, in the event it sticks, get away with the narrowest interpretation.