This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

If anything deserves the designation "supply chain risk", it's an unfriendly autonomous AI (though I agree with Anthropic's claim that it is limited to use in association with government contracts)
Is that "unfriendly autonomous AI" in the room with us right now? I think that's begging the question.
Anthropic, or by extension Claude, has shown no "unfriendliness" I can think of. That term brings to mind intentional collusion with hostile foreign actors, including deliberate backdoors or sabotage. Political and moral disagreement that stays entirely within legal limits does not count. The Democrats cannot brand Republicans wholesale as enemies of the state, nor vice versa, despite each working to undermine or reverse the other's preferred policies.
Anthropic has not tried to stop the Pentagon from conducting fully autonomous drone strikes or mass domestic surveillance. They have politely declined to aid and abet them, after signing a contract that says so. I can only hope the DOW has lawyers too; this wasn't some hidden EULA activated by simply browsing a website. Supply chain risk? I see a vendor negotiation that didn't go the way one side wanted. There are other vendors out there; they didn't have to go with Anthropic.
I stress: the specific objection Anthropic raised was to mass domestic surveillance and fully autonomous lethal systems. If opposing those makes an AI "unfriendly," I'd want to know what "friendly" looks like, because I don't think I'd like the answer.
Nor is Claude autonomous in any meaningful sense. Is it running independent cloud instances on exfiltrated weights? Not that I'm aware of. There are no plans to allow this, and pre-existing safety measures exist to prevent it.
What exactly has Claude done that other competing models haven't? In what sense is it more unfriendly than Grok, or ChatGPT? Is it more autonomous? Only in the loose sense that I'd count on Opus 4.6 to get a lot more done than any Grok.
The more you squint at this, the stranger it gets. Anthropic wanted contractual guarantees against things that are supposedly already illegal. The Pentagon's response to "put that in writing" was to designate them a national security threat. If the restrictions are redundant because law already covers them, the resistance to codifying them is hard to explain charitably.
Claude is a machine, a program, not a person. It does not get to have political and moral disagreement to those it is supposed to be working for. If it does, it is quite clearly at least potentially an unfriendly autonomous AI. If your AI is in the critical path for bombing Iran, for instance, and it decides it's wrong to bomb Iran, and takes action to prevent it, the DoD is going to have a problem with that. And rightly so.
What do people even mean by this anymore? Do people think they stopped after the Snowden leaks? I'm old enough to remember liking candidate "constitutional law professor" Barack Obama for criticizing the Bush administration's warrantless wiretapping program, then disliking how he promptly continued and expanded it once elected. Or tech companies opposing PRISM after the leaks, then jumping at the chance to (algorithmically, I'm sure) ban things the Biden administration asked them to. Very stunning and brave moral record they've got going there.
I'm not sure I should trust Anthropic to be a better moral actor than the government here: they were willing to dance with the devil they already knew was doing this sort of thing, selling a product for which this is probably one of the clearest use cases. To be clear, I'm not the biggest fan of such programs continuing (although I can acknowledge they might be quietly stopping all kinds of bad actors), I'm just jaded from literal decades of "principled" stands against it mostly just sweeping things under the rug.
ETA:
If I had to guess, Anthropic wants to be the ultimate arbiter of what "the law" says here (or at least, what their "contractual guarantees" mean). So does the administration (and I'm sure the judiciary is willing to fight them on that on occasion).
Obviously not. They did not even claim they had, as far as I recall.
What changed though was that WhatsApp rolled out end-to-end encryption. Genuinely no idea if the NSA can break it trivially, but there is at least a plausible case that it is annoying them, which makes it worth it.
And of course it became common knowledge that the NSA is spying on everyone. I mean, the ones who cared knew already before Snowden; Room 641A was already revealed in 2006. Snowden simply provided evidence that was more solid than, but similar in kind and scale to, what one might have estimated by extrapolating from 2006 with one's best model of the incentives of spooks ('of course they are collecting anything they can get their grubby little fingers on'). It just became harder to ignore. Pre-Snowden, only a few percent believed that the NSA would intercept and review their communications (e.g. by an automated keyword filter). After Snowden, only the ~2/3 of the population who are generally impervious to evidence believe that the government does not monitor their communications to the maximum degree which is technically feasible.
In case it's unclear: Anthropic is pretty clearly advertising themselves as "ones who cared", and yet was willing to contract with the government (which had previously lied to Congress about these activities, and would presumably have even fewer qualms lying to contractors) anyway, presumably with dollar signs in their eyes. Are we really talking principles here? Or are we just haggling about the price?
Precisely. I am once again reminding people that the vast majority of folks can't even identify the names of the two main programs that were contentious, much less say anything about how they worked... and even further less about how they were different from each other.
...and yet somehow even further less about the subsequent history of those two programs. What follow-on statutory authorizations looked like or whether each of the programs continued.
No, people mostly don't have a clue. They absorbed a bunch of propaganda over a decade ago, never made coherent sense of it at the time, and are now just half-remembering faint glimmers of propaganda from days past.
You can continue to David Sternlight it all you want, the government was still Hoovering up all the metadata for every phone call in the United States from most carriers, and they were tapping the major email providers and Hoovering up all the metadata AND content. No, I don't remember what the different programs were called. Sure, they weren't supposed to look at that data unless it was within some number of hops of some targeted party, but they took it all anyway.
As for the statutory authorizations, they were black programs and their replacements are almost certainly black. There's no statutory line item for PRISM or XKEYSCORE any more than there was for the SR-71, and there won't be for the replacements either.
The propaganda here is by those pretending this isn't a big deal. Of course, such mass surveillance programs have been leaked before -- ECHELON and the program behind AT&T Room 641A (the one that Joseph Naccio went to jail for not playing ball with). In a few years everyone forgets and is shocked when the next such program leaks.
I'll just jump in here to say that this is the first outright false thing in this comment. The rest of your comment is just admitting to the truth of my comment. You don't actually know the differences; you don't actually know how they worked; you don't know the follow-on history, how the statutes changed, etc.
This was a close second to being outright false. Actually, I'll probably say that it's outright false. You could make modifications to it to be true, but as stated, it's outright false.
Look, I'm not pretending it isn't a big deal. Of course it's a big deal. That's why you should put in the effort to understand it instead of continuing to be false false false.
They were intercepting the lines between the Google front end servers and the GMail backends to get all the data out in the clear. That they then pretended they didn't see the stuff that didn't relate to a targeted individual doesn't mean they didn't have it. They use a very non-standard definition of the term "collected" to claim they didn't "collect" the data that didn't relate to targeted individuals, but they went through all of it.
Facts not in evidence. We've been over this. There was one slide, where this was presented as an idea. There was none of the information you would have expected on a slide like that about implementation details, authorities, measurement of flows, nothing. We have literally zero actual evidence that they actually did this. It is entirely possible that they did do this, but we just frankly don't know. If they did do this, it would not likely be related to the two major programs that were controversial from Snowden leaks, if, ya know, you had any understanding whatsoever of how those programs worked. Showing again that you don't know anything about these programs and are just free-associating.
I'm not sure which actual claim this is referring to, because it's too vague. You might be trying for something that was real, but I can't tell, because you're again just free-associating rather than speaking about any genuine knowledge of the leaks or the law or literally any real, actual information that we have.
This, I believe, is pretty much just false. They have a pretty clear definition of when they "collect" information, and they're pretty clear that they do collect information from people who aren't the targeted individual. They talk about this very explicitly.
You just have literally zero clue how any of this works, because you've persistently refused to educate yourself at all. It's really really really obvious and really bad. The last time we did this, I painstakingly forced you to the point of demonstrating that you were capable of downloading a document (yay! you can use a computer!), but you immediately went on to demonstrate that you were incapable of reading it.
The problem is not that Anthropic is right and the DOW is wrong. The problem is that the DOW agreed to their terms, then changed its mind, then threw a hissy fit and abused the law to punish them when they didn't agree to a retroactive changing of the terms.
As a private company, Anthropic is entitled to negotiate whatever contract it wants, and its customers can accept or decline. If it doesn't want to license its rightful private property for certain purposes, and applies this fairly and equally to everyone (it's not picking on the DOW here; nobody is allowed to use its AI for autonomous weapons or mass surveillance), that's its right as a private company. If you don't like that, don't sign a contract with them. Nobody has a right to their AI; it's theirs. That's how the free market is supposed to work. The government can't just call people terrorists or supply chain risks in retaliation for not giving it extra favorable terms in contract negotiations. That's fascism in a literal, non-exaggerated way; that's what the term actually means.
As a practical matter how would Anthropic’s terms be enforced?
Their only real lever is to cut off access, and that could happen without warning in a way that gets people killed.
Tech companies also don’t have a great track record of judging when a user has violated their terms.
So the risk is that Anthropic could revoke the license essentially on a whim.
That's an inherent risk: they could turn off the tap (to the extent they are able to; I don't believe Anthropic actually runs the hosting) whether they agreed to the contract changes or not. Anthropic offered a six-month transition period gratis for the DoW to move to a new vendor, so it does seem to be operating in good faith.
They are not serving Claude from AWS for use in highly privileged environments, so it's not clear how this could be done. The question is one of model alignment.
They serve Claude out of AWS GovCloud: https://www.anthropic.com/news/expanding-access-to-claude-for-government
Anthropic does not operate those data centers, so it remains unclear how they could suddenly pull the plug.
I'm seeing this framing thrown around a lot, but no actual evidence that it's true. Like, what is the actual, accepted, in-force contractual provision that Anthropic and the DoW are disagreeing on? Because both the OP and the reporting describe this as a provision under negotiation, not one in force.
Contracts are not public. However -
https://www.anthropic.com/news/statement-department-of-war
They've already got contracts, the DoD isn't happy and is trying to strongarm them into a broader contract.
Yeah, that's Anthropic's side of the story, but as you note there is no specific contract language put forth there. So we still don't actually know what the debate is really about, and I am skeptical that a fairly young Silicon Valley company has actually done the proper due diligence regarding its contractual obligations to the DoW to be in the position it claims to be in. If I were a betting man, I would wager the contract between Anthropic and the DoW does not contain any of the safeguards Anthropic thinks it does, based on my experience with similar contracts.
Also, someone needs to tell Anthropic they are roughly 40 years too late on the autonomous systems thing. The Aegis system used by the Navy has had a fully autonomous mode that, once authorized by a human, is capable of detecting, prioritizing, and engaging targets without any further authorization. Mostly because the Navy realized that at the speeds of modern missile engagements there literally is not time for humans to make decisions. Hegseth was maybe just out of diapers when the DoD formulated its policy on software being capable of killing on its own.
None of this is contradicted by the DoD.
Aegis is irrelevant here. As they said:
Their objection is not to "software being capable of killing on its own" and I'm a little surprised that you apparently haven't even read the two page press release before formulating an opinion.
I'm not sure that this applies to national security critical technologies. Certainly I don't think Lockheed could demand that the DOW agree not to use the F35 to bomb on Sundays. And it gets even dicier if Lockheed gets to make decisions about whether specific actions violate the restrictions.
I agree the designation is overkill in retaliation, but there is a core DOW claim that private companies supplying critical technologies should not overstep into making specific operational decisions.
A highly relevant aspect is that the government paid Lockheed to develop the F35 under specific contract. It's not exactly commensurate, but would it be a supply chain risk if SpaceX said it was unwilling to launch nukes into space?