This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Anthropic has been declared a "Supply-Chain Risk to National Security" by SecWar Hegseth, via tweet, because that's the universe we live in.
For those not following along:
Anthropic has had a contract with the Pentagon - valued at up to $200 million - since July 2024, making it the only AI company with models deployed on the USG's classified networks. Over several months, negotiations broke down over two specific safeguards Anthropic wanted built into any agreement: a prohibition on using Claude for mass domestic surveillance of Americans, and a prohibition on using it to power fully autonomous weapons systems. I stress fully autonomous, and the only reason Yudkowsky isn't spinning in his grave is that he's still alive. I'm not sure he enjoys it.
The Pentagon's position was that it has its own internal policies and legal standards, that mass surveillance and autonomous weapons are already regulated by law, and that it shouldn't have to negotiate individual use cases with a private company. It demanded that all AI firms make their models available for "all lawful purposes," full stop.
The Pentagon set a hard deadline of 5:01 PM Friday for Anthropic to drop its two exceptions. Amodei publicly refused to budge on either point. The deadline passed without agreement.
Shortly after, Hegseth declared Anthropic a "supply chain risk to national security," announcing that, effective immediately, no contractor, supplier, or partner doing business with the U.S. military may conduct any commercial activity with Anthropic. CBS News article, for those not fond of Twitter.
Around the same time, Trump ordered every federal agency to immediately cease using Anthropic's technology, while allowing a six-month phase-out period for agencies, like the DoW, already using it.
Declaring a company a supply chain risk is typically reserved for businesses operating out of adversarial countries, Huawei for example. As far as I can tell, Anthropic is correct in describing this as an unprecedented action when applied to an American company. Especially one that, as far as I can see, hasn't done anything wrong except refuse to jump when asked.
Anthropic says it will challenge any supply chain risk designation in court, calling the move "legally unsound" and warning it would set a "dangerous precedent for any American company that negotiates with the government." Anthropic's press statement.
They also argue that under federal law, the designation can only apply to the use of Claude as part of Pentagon contracts, and cannot affect how contractors use Claude to serve other customers.
Not one to let an opportunity, or a still-warm corpse, go to waste, Altman announced that OAI had struck a deal with the Pentagon. In language so smarmy that I'm not sure there's anything underneath it at all, Altman claims the deal preserves the same core principles Anthropic had fought for: prohibitions on domestic surveillance and autonomous weapons. I am unsure why the USG would find this any more acceptable than it did when Anthropic demanded the same, except that they (quite reasonably) expect Altman to be more "morally flexible".
There's a petition circulating where hundreds of Google and OAI employees publicly ask their respective corporate overlords to stand with Anthropic. Apparently all signatures are validated.
Meanwhile, Scott, mild-mannered to a fault and very loath to dip his toes into political waters, is losing it on Twitter. And I agree with him. If the DoW finds Anthropic's terms so unbearable, that should have been considered before signing the contract. If it changed its mind, it ought to have canceled and accepted whatever penalties that involved, instead of using the full weight of the state for what can only be described as bullying. And if domestic mass surveillance and fully automated weaponry are already legally off the table, then why all the fuss over putting that in a legal document?
Goddammit. It's only February. I'm tired, boss. I just find it very funny that:
If anything deserves the designation "supply chain risk", it's an unfriendly autonomous AI (though I agree with Anthropic's claim that it is limited to use in association with government contracts)
Is that "unfriendly autonomous AI" in the room with us right now? I think that's begging the question.
Anthropic, or by extension Claude, has shown no "unfriendliness" I can think of. That term brings to mind intentional collusion with hostile foreign actors, including deliberate backdoors or sabotage. Political and moral disagreement entirely within legal limits does not count. The Democrats cannot blanket-label Republicans as enemies of the state, nor vice versa, despite each working to undermine or reverse the other's preferred policies.
Anthropic has not tried to stop the Pentagon from conducting fully autonomous drone strikes or mass domestic surveillance. They have politely declined to aid and abet them, after signing a contract that says so. I can only hope the DoW has lawyers too; this wasn't some hidden EULA activated by simply browsing their website. Supply chain risk? I see a vendor negotiation that didn't go the way one side wanted. There are other vendors out there; they didn't have to go with Anthropic.
I stress: the specific objection Anthropic raised was to mass domestic surveillance and fully autonomous lethal systems. If opposing those makes an AI "unfriendly," I'd want to know what "friendly" looks like, because I don't think I'd like the answer.
Nor is Claude autonomous in any meaningful sense. Is it running independent cloud instances on exfiltrated weights? Not that I'm aware of. There are no plans to allow for this, and pre-existing safety measures to prevent it.
What exactly has Claude done that other competing models haven't? In what sense is it more unfriendly than Grok, or ChatGPT? Is it more autonomous? Only in the loose sense that I'd count on Opus 4.6 to get a lot more done than any Grok.
The more you squint at this, the stranger it gets. Anthropic wanted contractual guarantees against things that are supposedly already illegal. The Pentagon's response to "put that in writing" was to designate them a national security threat. If the restrictions are redundant because law already covers them, the resistance to codifying them is hard to explain charitably.
What do people even mean by "mass domestic surveillance" anymore? Do people think it stopped after the Snowden leaks? I'm old enough to remember liking candidate "constitutional law professor" Barack Obama for criticizing the Bush administration's warrantless wiretapping program, then disliking how he promptly continued and expanded it once elected. Or tech companies decrying PRISM after the leaks, before jumping at the chance to (algorithmically, I'm sure) ban things the Biden administration asked them to. Very stunning and brave moral record they've got going there.
I'm not sure I should trust Anthropic to be a better moral actor than the government here: they were willing to dance with the devil they already knew was doing this sort of thing, selling a product for which this is probably one of the clearest use cases. To be clear, I'm not the biggest fan of such programs continuing (although I can acknowledge they might be quietly stopping all kinds of bad actors), I'm just jaded from literal decades of "principled" stands against it mostly just sweeping things under the rug.
ETA:
If I had to guess, Anthropic wants to be the ultimate arbiter of what "the law" says here (or at least, what their "contractual guarantees" mean). So does the administration (and I'm sure the judiciary is willing to fight them on that on occasion).
The problem is not that Anthropic is right and the DoW is wrong. The problem is that the DoW agreed to their terms, then changed its mind, then threw a hissy fit and abused the law to punish them when they didn't agree to a retroactive rewriting of the terms.
As a private company, Anthropic is entitled to negotiate whatever contract it wants, and its customers can accept or decline. If it doesn't want to license its rightful private property for certain purposes, and applies this fairly and equally to everyone (it's not picking on the DoW here; nobody is allowed to use its AI for autonomous weapons or mass surveillance), that's its right as a private company. If you don't like that, don't sign a contract with them. Nobody has a right to their AI; it's theirs. That's how the free market is supposed to work. The government can't just call people terrorists or supply chain risks in retaliation for not giving it extra-favorable terms in contract negotiations. That's fascism in a literal, non-exaggerated way; that's what the term actually means.
I'm seeing this framing thrown around a lot, but no actual evidence it's true. Like, what is the actual, accepted, in-force contractual provision that Anthropic and the DoW are disagreeing on? Because the OP and the reporting both describe this as a provision under negotiation, not one in force.
Contracts are not public. However -
https://www.anthropic.com/news/statement-department-of-war
They've already got contracts, the DoD isn't happy and is trying to strongarm them into a broader contract.
Yeah, that's Anthropic's side of the story, but as you note there is no specific contract terminology put forth there. So we still don't actually know what the debate is really about, and I am skeptical that a fairly young Silicon Valley company has actually done the proper due diligence regarding its contractual obligations to the DoW to be in the position it claims to be in. If I were a betting man, I would wager the contract between Anthropic and the DoW does not contain any of the safeguards Anthropic thinks it does, based on my experience with similar contracts.
Also, someone needs to tell Anthropic they are roughly 40 years too late on the autonomous systems thing. The Aegis system used by the Navy has long had a fully autonomous mode that, once authorized by a human, is capable of detecting, prioritizing, and engaging targets without any further authorization. Mostly because the Navy realized that at the speeds of modern missile engagements there literally is not time for humans to make decisions. Hegseth was maybe just out of diapers when the DoD formulated its policy on software being capable of killing on its own.