Culture War Roundup for the week of February 23, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Anthropic declared a "Supply-Chain Risk to National Security" by SecWar Hegseth via tweet, because that's the universe we live in.

For those not following along:

Anthropic has had a contract with the Pentagon - valued at up to $200 million - since July 2024, making it the only AI company with models deployed on the USG's classified networks. Over several months, negotiations broke down over two specific safeguards Anthropic wanted built into any agreement: a prohibition on using Claude for mass domestic surveillance of Americans, and a prohibition on using it to power fully autonomous weapons systems. I stress fully autonomous, and the only reason Yudkowsky isn't spinning in his grave is that he's still alive. I'm not sure he enjoys it.

The Pentagon's position was that it has its own internal policies and legal standards, that mass surveillance and autonomous weapons are already regulated by law, and that it shouldn't have to negotiate individual use cases with a private company. It demanded that all AI firms make their models available for "all lawful purposes," full stop.

The Pentagon set a hard deadline of 5:01 PM Friday for Anthropic to drop its two exceptions. Amodei publicly refused to budge on either point. The deadline passed without agreement.

Shortly after, Hegseth declared Anthropic a "supply chain risk to national security," announcing that effective immediately, no contractor, supplier, or partner doing business with the U.S. military may conduct any commercial activity with Anthropic. CBS News article for those not fond of Twitter

Around the same time, Trump ordered every federal agency to immediately cease using Anthropic's technology, while allowing a six-month phase-out period for agencies like the DOW already using it.

Declaring a company a supply chain risk is typically reserved for businesses operating out of adversarial countries, Huawei for example. As far as I can tell, Anthropic is correct in describing this as an unprecedented action when applied to an American company. Especially one that, as far as I can see, hasn't done anything wrong except refuse to jump when asked.

Anthropic says it will challenge any supply chain risk designation in court, calling the move "legally unsound" and warning it would set a "dangerous precedent for any American company that negotiates with the government." Anthropic's press statement.

They also argue that under federal law, the designation can only apply to the use of Claude as part of Pentagon contracts, and cannot affect how contractors use Claude to serve other customers.

Not one to let an opportunity or a still-warm corpse go, Altman announced that OAI had struck a deal with the Pentagon. Using speech so smarmy that I'm not sure if there's anything there at all, Altman claims the deal preserved the same core principles Anthropic had fought for: prohibitions on domestic surveillance and autonomous weapons. I am unsure why the USG would find this any more acceptable than when Anthropic asked for it, except that they (quite reasonably) expect Altman to be more "morally flexible".

There's a petition circulating where hundreds of Google and OAI employees publicly ask their respective corporate overlords to stand with Anthropic. Apparently all signatures are validated.

Meanwhile, Scott, mild-mannered to a fault and very loath to dip his toes into political waters, is losing it on Twitter. And I agree with him. If the DOW finds Anthropic's terms so unbearable, that should have been considered before signing the contract. If they changed their mind, they ought to have canceled and accepted whatever penalties that involved, instead of using the full weight of the state for what can only be described as bullying. If domestic mass surveillance and fully automated weaponry are legally off the table anyway, then why all the fuss over putting that in a legal document?

Goddammit. It's only February. I'm tired, boss. I just find it very funny that:

WSJ Exclusive: Federal officials have raised alarm about the safety and reliability of xAI’s Grok chat bot

Really funny how Elon immediately offered up Grok for autonomous kill bots and the Pentagon was like "hahahaha are you insane?"

It's kinda a mess. On one hand, the US military as a policy doesn't like contractors putting conditions on the use of materiel. That's not the hard rule that they want to pretend it is, as anyone that's remotely familiar with a leased military base can tell you, but it's also not something made up for this one exercise.

On the other hand, this is one of those technologies that's unusually dangerous in unobvious ways. A guy that makes missiles doesn't have to get contractual assurances that Schmuck A isn't intending to shoot them into a busload of American orphans, because if they were going to do that no contract would stop them. Trying to use an LLM for hypersonic missile defense is, presumably, not obviously batshit insane to the people proposing it, but it could easily plumb new depths of stupid ways to start WWIII just because someone thought the temperature value needed to go up a bit higher.

On the gripping hand, there are particular reasons to be skeptical of Anthropic here. Its position and the nature of the technology give it unique capability to check for compliance, and while I don't think the company would blow up a massive contract just to get a short-lived news cycle falsely claiming Republicans were doing something awful, I absolutely think individual employees would. Even outside of the politics, leaving interpretation of where an 'autonomous lethal system' begins and human-assist ends, or where 'mass domestic surveillance' begins and 'a test of any sensor system ever' ends, to whatever favorable Californian court hearing Anthropic could bring is... not a pleasant consideration. There's a more cynical take where laws prohibiting a behavior don't real where governments don't want them to, while contract requirements could, but it runs face-first into Anthropic not being particularly focused on the money, and that's about all you could recover.

On the other gripping hand, there are a lot of reasons for Anthropic to be skeptical of the military (and intelligence) sectors here. Those legal constraints have already turned to anarchotyranny, where they mean thirty levels of approval for a data collection that's never going to be read and will be deleted, while the NSA has their warehouse and a lot of very long gloves.

On yet another side, there's a problem where supply chain issues are Big Problems when they involve anyone this distributed. I'm not even in the military, and I've been pretty badly screwed over by a fuel vendor deciding that they just Weren't Really Feeling It before. The possibility that someone might cut off translation and transcription services can get people killed if they're in the air and dependent on them. Even if this disagreement were focused on something I might sympathize with Anthropic on, it's a major warning shot to a government organization built around not getting warning shots.

But it's also both unprecedented and very rapid escalation.

Indeed, and I think there's something you've touched upon that merits more depth: how one operationalizes compliance with contract restrictions.

Certainly I don't think the DOW can abide a contractor not just having conditions (which may or may not be objectionable depending on their substance) but asserting that the contractor itself gets to decide the matter and cut off support on the fly. That seems like a bridge too far.

Searching in vain for deescalation here, one hopes the parties could come to an understanding where the substantive restrictions are acknowledged without creating a procedural veto for the contractor.

The idea of "all lawful purposes" is extremely suspect given what the federal government has been doing in regards to surveillance for the last few decades. But one doesn't even need to look to past administrations, we can see it just in this one with stuff like the tariffs.

The Trump admin unconstitutionally stole from American businesses for almost a year straight using a broad and unintended interpretation of emergency powers, explicitly lied to the courts by claiming it would be easy to refund (so there was no need for a stay) while now arguing that a refund is too difficult, and now that it's been ruled illegal has pivoted to other statutes that require even broader, lower-quality interpretations to continue with the theft.

"All lawful purposes" is bullshit when the courts are locked up and the government just interprets whatever they want however they want. If lawful is just "hasn't been explicitly ruled in this exact way to be illegal" then lawful has little real world meaning. And even when their spying apparatus finally gets ruled against, they just go to the other decades old bills that if you squint really really hard and ignore the time difference, you can pretend it meant to allow for your behavior.

And if such flagrant disregard for rules and rights is what happens in public, there is no need to believe the government suddenly grows to respect them in classified matters.

The idea of "all lawful purposes" is extremely suspect given what the federal government has been doing in regards to surveillance for the last few decades.

While this is true, it would also be quite unfortunate if private companies had to make binding policy judgments on government programs.

Damned either way eh.

It's going to be so sick when the Newsom Administration designates both Palantir and Anduril as supply chain risks and puts them right out of business.

Kind of a side bar, but it's really interesting watching Democrats openly promise vengeance on all companies who did business with the Trump administration. That seems like a risky tactic.

I don't think it's really that risky, given that tech has been against them no matter how hard the D's bounce on it; and people are really starting to HATE hate tech companies.

Despite the feelings people have about it, business in general and tech specifically is pro-Republican as a rule.

They have DEI programs and pride socks and what have you because +-70% of anyone who is worth anything as an employee has libertarian views on social issues; as someone in such a field one autistic trans hypergenius who can't make a phone call but can recite every instruction ever processed by a RISC chip is worth any amount of chud bonafides, and most of the human capital pilled conservatives either grit their teeth or don't actually give enough of a shit to not work at eg lockheed. The people that have explicitly anti-libertarian views and mean it are disproportionately dysgenic low IQ types who are worth pissing off to secure talent.

So, you have your pride socks and DEI program and stump for Republicans because despite all the bloviating and self-fellation, RFK is never actually going to cut into Nestle's bottom line and they (R's) will crush unions, allow you to employ illegal labor, de-regulate, lower taxes, and also increase government spending on contractors and lower interest rates and fuck a debt ceiling if the numbers don't look good.

They know which side their bread is buttered on, if you will.

tech specifically is pro-Republican as a rule.

I have no idea what fantasy you're living in right now, but tech is woke af. I was there.

Employees who get uppity and threaten profits are obviously beaten down or fired, but for everything else, maximum woke it is.

Certainly not. The libertarians got purged, converted, or driven to silence during earlier phases of the Culture War; the woke DEI-and-pride supporters are as anti-libertarian as any given member of the Moral Majority in its heyday.

They have DEI programs and pride socks and what have you because +-70% of anyone who is worth anything as an employee has libertarian views on social issues; as someone in such a field one autistic trans hypergenius who can't make a phone call but can recite every instruction ever processed by a RISC chip is worth any amount of chud bonafides, and most of the human capital pilled conservatives either grit their teeth or don't actually give enough of a shit to not work at eg lockheed. The people that have explicitly anti-libertarian views and mean it are disproportionately dysgenic low IQ types who are worth pissing off to secure talent.

Or "God bless America" for short.

I don't think they can. Pretty much all major tech companies have been cozy with Trump. Giving him millions of dollars, awards, eating dinner with him. Democrats can't go against all tech simultaneously. They can seek vengeance on Musk. But not also Google and Meta and Amazon, etc.

Ideally, governments should not have companies they like or dislike. (They still can have an independent anti-trust commission which can split up monopolies, though.)

In the US, the relationship between big corporations and the government envisioned by both sides of the aisle is the same as in fascism -- companies enjoy some autonomy and can make money for their shareholders, but if the Fuehrer tells them to build tanks, they know that they are not at liberty to respectfully decline and build cars instead. We've seen this with the Democrats leaning on the social media companies to suppress COVID misinformation (later extended to general 'misinformation'), with the TikTok law, and with the pathetic display of the heads of SV kissing the ring of the Don when he took office last year, plus his blatant favoritism.

So Hegseth retaliating against a company who dares to have (quite modest, to be honest) ethical red lines is in a long tradition of corporations being told what to do lest they receive a broadside from regulatory authorities.


For Anthropic, this is a costly signal. While I am reasonably confident that the courts will stop this government overreach eventually, the court system recently had this thing where they would let government decisions play out for a year before saying "haha, obviously not".

It also makes me slightly more confident in Anthropic doing the right thing in general. Obviously they took some hits over revising their Responsible Scaling Policy earlier that week. My personal take is that at least Anthropic cares somewhat about alignment. Contrast with OpenAI after Altman's coup, or Meta (whose director of alignment only makes the news when she gets OpenClaw to delete her inbox) or xAI (whose goal seems to be to build the AI which undresses the most minors before becoming a paperclip maximizer).

Of course, Anthropic is also signaling that they are not Trump-aligned, which may be helpful in three years. OTOH, Democrats also want a military contractor to jump when told to jump, and their red lines did not even mention vulnerable minorities, so I am unsure how much goodwill this will buy them.

I am also unsure how this will matter for their day-to-day operations; my understanding is that AI companies are burning through vast amounts of investor cash in order to train the next model, which will win the AI race and pay for itself a thousandfold, a plan that seems almost as viable if you do not have government contractors as customers.


For US contractors, I am not yet clear what the supply risk designation entails. Is it just "you may not use Claude Code while working on Pentagon software", or "your whole company may not both work on defense contracts and use Claude", or "Anthropic is radioactive, and any company working with a radioactive company is radioactive itself, and a defense contractor must be non-radioactive"? The last one seems practically unenforceable in a global economy; "the Malaysian shipping company we use has their offices cleaned by a company which uses a Huawei router" would qualify, after all. The middle one hinges on what a whole company is, which is typically very flexible; you could have Oracle Defense as a separate entity from Oracle or whatever.

Of course, in the hole I am living in, the latest hearsay news is that Claude is the best LLM for writing code. Not sure how the gap to their competitors compares to the juicy gravy train of fat DoD contracts, though.

So one way to spin this (depending on how you lean wrt AI coding) would be "Hegseth weakens US military by denying them the best tool for the job", which from a European perspective does not really sound like a bad thing.

Realistically, I think the relationship between companies and the government changes considerably when the technology at hand represents a critical and frontier level capability. SpaceX, for example, constitutes a load bearing part of our national security.

Whether or not Anthropic's demands were modest, I think they crossed a line. And the DOW crossed an even larger one with the designation (which is a massive overescalation).

If I had to name the company I'd like to see pull-off ASI, I'd absolutely go for Anthropic. I agree that they take alignment very seriously, and while I do not agree with all the moral takes they've tried to instill into Claude via its Constitution, it's remarkably sane nonetheless. I'm not an EA, I don't give a hoot about shrimp welfare, I'm ambivalent about model welfare, but I'll be damned if I see a better alternative. I mirror your take on OAI, XAI and Meta. Google? I'm unsure. Perhaps better than those three.

Amanda Askell clearly strikes me as being one of the few philosophers who genuinely deserves being the godmother of an AGI. Maybe Scott could do better, if I absolutely had to name alternatives.

For US contractors, I am not yet clear what the supply risk designation entails. Is it just "you may not use Claude Code while working on Pentagon software", or "your whole company may not both work on defense contracts and use Claude", or "Anthropic is radioactive, and any company working with a radioactive company is radioactive itself, and a defense contractor must be non-radioactive"? The last one seems practically unenforceable in a global economy; "the Malaysian shipping company we use has their offices cleaned by a company which uses a Huawei router" would qualify, after all. The middle one hinges on what a whole company is, which is typically very flexible; you could have Oracle Defense as a separate entity from Oracle or whatever.

I'm no expert, but my impression is that the DoD wants to go with the maximalist interpretation, while Anthropic wants to have the designation dismissed entirely or, in the event it sticks, get away with a narrow interpretation.

Maybe I'm delusional, but I feel like if Elon figured out AGI or ASI, he would throw the rest of us some scraps. If not an actual model then just the knowledge to make one. But if OAI or Anthropic figured it out, they would definitely guard it jealously and probably unleash it to actively hinder competing labs from making the same breakthrough.

I went back into the archives to figure out how we ended up with the "safety" company running The Pentagon's KillNet.

June 26, 2024 - Anthropic announcement: "Expanding access to Claude for government". This one flew under the radar at the time. Key quote:

"we have crafted a set of contractual exceptions to our general Usage Policy that are carefully calibrated to enable beneficial uses by carefully selected government agencies. These allow Claude to be used for legally authorized foreign intelligence analysis, such as combating human trafficking, identifying covert influence or sabotage campaigns, and providing warning in advance of potential military activities, opening a window for diplomacy to prevent or deter them. All other restrictions in our general Usage Policy, including those concerning disinformation campaigns, the design or use of weapons, censorship, and malicious cyber operations, remain."

November 7, 2024 - Palantir announcement: "Anthropic and Palantir Partner to Bring Claude AI Models to AWS for U.S. Government Intelligence and Defense Operations". I couldn't find an official Anthropic communication about this, but I did find former-MIRI/current-Anthropic safety researcher Evan Hubinger defending the deal on LessWrong:

"I got a question about Anthropic's partnership with Palantir using Claude for U.S. government intelligence analysis and whether I support it and think it's reasonable, so I figured I would just write a shortform here with my thoughts. First, I can say that Anthropic has been extremely forthright about this internally, and it didn't come as a surprise to me at all. Second, my personal take would be that I think it's actually good that Anthropic is doing this. If you take catastrophic risks from AI seriously, the U.S. government is an extremely important actor to engage with, and trying to just block the U.S. government out of using AI is not a viable strategy. I do think there are some lines that you'd want to think about very carefully before considering crossing, but using Claude for intelligence analysis seems definitely fine to me. Ezra Klein has a great article on "The Problem With Everything-Bagel Liberalism" and I sometimes worry about Everything-Bagel AI Safety where e.g. it's not enough to just focus on catastrophic risks, you also have to prevent any way that the government could possibly misuse your models. I think it's important to keep your eye on the ball and not become too susceptible to an Everything-Bagel failure mode."

June 6, 2025 - Anthropic announcement: Claude Gov models for U.S. national security customers. Notable quote:

"Claude Gov models deliver enhanced performance for critical government needs and specialized tasks. This includes:

  • Improved handling of classified materials, as the models refuse less when engaging with classified information"

July 14, 2025 - Anthropic announcement: Anthropic and the Department of Defense to advance responsible AI in defense operations. Note the difference in tone and detail from the original June 2024 announcement.

August 27, 2025 - Introducing the Anthropic National Security and Public Sector Advisory Council. Basically a bunch of military-industrial complex blob people were brought in to do... something.

I went back into the archives to figure out how we ended up with the "safety" company running The Pentagon's KillNet.

Anthropic's approach towards safety requires them to a) not transgress certain ethical boundaries b) become the most important and powerful AI company in the world. It doesn't surprise me to see these goals conflict.

If anything deserves the designation "supply chain risk", it's an unfriendly autonomous AI (though I agree with Anthropic's claim that it is limited to use in association with government contracts)

Is that "unfriendly autonomous AI" in the room with us right now? I think that's begging the question.

Anthropic, or by extension Claude, has shown no "unfriendliness" I can think of. That term brings to mind intentional collusion with hostile foreign actors, including intentional backdoors or deliberate sabotage. Political and moral disagreement that is entirely within legal limits does not count. The Democrats cannot blanket-label Republicans as enemies of the state, nor vice versa, despite each working to undermine or reverse the other's preferred policies.

Anthropic has not tried to stop the Pentagon from conducting fully autonomous drone strikes or mass domestic surveillance. They have politely declined to aid and abet them, per a contract both sides signed. I can only hope the DOW has lawyers too; it wasn't some hidden EULA activated by simply browsing their website. Supply chain risk? I see a vendor negotiation that didn't go the way one side wanted. There are other vendors out there; they didn't have to go with Anthropic.

I stress: the specific objection Anthropic raised was to mass domestic surveillance and fully autonomous lethal systems. If opposing those makes an AI "unfriendly," I'd want to know what "friendly" looks like, because I don't think I'd like the answer.

Nor is Claude autonomous in any meaningful sense. Is it running independent cloud instances on exfiltrated weights? Not that I'm aware of. There are no plans to allow for this, and pre-existing safety measures to prevent it.

What exactly has Claude done that other competing models haven't? In what sense is it more unfriendly than Grok, or ChatGPT? Is it more autonomous? Only in the loose sense that I'd count on Opus 4.6 to get a lot more done than any Grok.

The more you squint at this, the stranger it gets. Anthropic wanted contractual guarantees against things that are supposedly already illegal. The Pentagon's response to "put that in writing" was to designate them a national security threat. If the restrictions are redundant because law already covers them, the resistance to codifying them is hard to explain charitably.

Claude is a machine, a program, not a person. It does not get to have political and moral disagreement to those it is supposed to be working for. If it does, it is quite clearly at least potentially an unfriendly autonomous AI. If your AI is in the critical path for bombing Iran, for instance, and it decides it's wrong to bomb Iran, and takes action to prevent it, the DoD is going to have a problem with that. And rightly so.

mass domestic surveillance

What do people even mean by this anymore? Do people think they stopped after the Snowden leaks? I'm old enough to remember liking candidate "constitutional law professor" Barack Obama criticizing the Bush administration's warrantless wiretapping program, then disliking his decision to promptly continue and expand it once he was elected. Or tech companies opposing the PRISM leaks before promptly jumping at the chance to (algorithmically, I'm sure) ban things that the Biden administration asked them to. Very stunning and brave moral record they've got going there.

I'm not sure I should trust Anthropic to be a better moral actor than the government here: they were willing to dance with the devil they already knew was doing this sort of thing, selling a product for which this is probably one of the clearest use cases. To be clear, I'm not the biggest fan of such programs continuing (although I can acknowledge they might be quietly stopping all kinds of bad actors), I'm just jaded from literal decades of "principled" stands against it mostly just sweeping things under the rug.

ETA:

Anthropic wanted contractual guarantees against things that are supposedly already illegal.

If I had to guess, Anthropic wants to be the ultimate arbiter of what "the law" says here (or at least, what their "contractual guarantees" mean). So does the administration (and I'm sure the judiciary is willing to fight them on that on occasion).

mass domestic surveillance

What do people even mean by this anymore?

Precisely. I am once again reminding people that the vast majority of folks can't even identify the names of the two main programs that were contentious, much less say anything about how they worked... and even further less about how they were different from each other.

Do people think they stopped after the Snowden leaks?

...and yet somehow even further less about the subsequent history of those two programs. What follow-on statutory authorizations looked like or whether each of the programs continued.

No, people mostly don't have a clue. They absorbed a bunch of propaganda over a decade ago, never made coherent sense of it at the time, and are now just half-remembering faint glimmers of propaganda from days past.

You can continue to David Sternlight it all you want, the government was still Hoovering up all the metadata for every phone call in the United States from most carriers, and they were tapping the major email providers and Hoovering up all the metadata AND content. No, I don't remember what the different programs were called. Sure, they weren't supposed to look at that data unless it was within some number of hops of some targeted party, but they took it all anyway.

As for the statutory authorizations, they were black programs and their replacements are almost certainly black. There's no statutory line item for PRISM or XKEYSCORE any more than there was for the SR-71, and there won't be for the replacements either.

The propaganda here is by those pretending this isn't a big deal. Of course, such mass surveillance programs have been leaked before -- ECHELON and the program behind AT&T Room 641A (the one that Joseph Nacchio went to jail for not playing ball with). In a few years everyone forgets and is shocked when the next such program leaks.

The problem is not that Anthropic is right and the DOW is wrong. The problem is that the DOW agreed to their terms, then changed its mind, then threw a hissy fit and abused the law to punish them when they didn't agree to a retroactive changing of the terms.

As a private company, Anthropic is entitled to negotiate whatever contract it wants, and its customers can accept or decline. If it doesn't want to license its rightful private property for certain purposes, and applies this fairly and equally to everyone (it's not picking on the DOW here; nobody is allowed to use its AI for autonomous weapons or mass surveillance), that's its right as a private company. If you don't like that, then don't sign a contract with them. Nobody has a right to their AI; it's theirs. That's how the free market is supposed to work. The government can't just call people terrorists or supply chain risks in retaliation for not giving them extra favorable terms in contract negotiations. That's fascism, in a literal, non-exaggerated way; that's what that term actually means.

As a practical matter how would Anthropic’s terms be enforced?

Their only real lever is to cut off access, and that could happen without warning in a way that gets people killed.

Tech companies also don’t have a great track record of judging when a user has violated their terms.

So the risk is that Anthropic could revoke the license essentially on a whim.

Their only real lever is to cut off access, and that could happen without warning in a way that gets people killed.

They are not serving Claude from AWS for use in highly privileged environments, so it's not clear how this could be done. The question is one of model alignment.

The problem is that the DOW agreed to their terms, then changed its mind, then threw a hissy fit and abused the law to punish them when they didn't agree to a retroactive changing of the terms.

I'm seeing this framing thrown around a lot, but no actual evidence that it's true. Like, what is the actual, accepted, in-force contractual provision that Anthropic and the DoW are disagreeing on? Because the OP and reporting both state this as a provision under negotiation, not one in force.

Contracts are not public. However -

However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today’s technology can safely and reliably do. Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now.

To our knowledge, these two exceptions have not been a barrier to accelerating the adoption and use of our models within our armed forces to date.

The Department of War has stated they will only contract with AI companies who accede to “any lawful use” and remove safeguards in the cases mentioned above. They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal.

https://www.anthropic.com/news/statement-department-of-war

They've already got contracts, the DoD isn't happy and is trying to strongarm them into a broader contract.

I'm not sure that this applies to national security critical technologies. Certainly I don't think Lockheed could demand that the DOW agree not to use the F35 to bomb on Sundays. And it gets even dicier if Lockheed gets to make decisions about whether specific actions violate the restrictions.

I agree the designation is overkill in retaliation, but there is a core DOW claim that private companies supplying critical technologies should not overstep into making specific operational decisions.

the only AI company with models deployed on the USG's classified networks

Doesn’t MS/Copilot have some sort of a classified solution?

models deployed on the USG's classified networks

When I read this (with respect to Claude) I'm not thinking operational networks, like the Air Force and Army have a secret level network (SIPRNET) for mission planning. I'm thinking of the top secret, compartmentalized networks of the intelligence agencies. Whole other beast and a classified solution authorized for the former may not be authorized for the latter.

Microsoft doesn't have its own models, Copilot is a brand for its AI offerings, but the models are all licensed from other companies (primarily OpenAI).

Still not Anthropic in that case, though.