This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Anthropic declared a "Supply-Chain Risk to National Security" by SecWar Hegseth via tweet, because that's the universe we live in.
For those not following along:
Anthropic has had a contract with the Pentagon - valued at up to $200 million - since July 2024, making it the only AI company with models deployed on the USG's classified networks. Over several months, negotiations broke down over two specific safeguards Anthropic wanted built into any agreement: a prohibition on using Claude for mass domestic surveillance of Americans, and a prohibition on using it to power fully autonomous weapons systems. I stress fully autonomous, and the only reason Yudkowsky isn't spinning in his grave is that he's still alive. I'm not sure he enjoys it.
The Pentagon's position was that it has its own internal policies and legal standards, that mass surveillance and autonomous weapons are already regulated by law, and that it shouldn't have to negotiate individual use cases with a private company. It demanded that all AI firms make their models available for "all lawful purposes," full stop.
The Pentagon set a hard deadline of 5:01 PM Friday for Anthropic to drop its two exceptions. Amodei publicly refused to budge on either point. The deadline passed without agreement.
Shortly after, Hegseth declared Anthropic a "supply chain risk to national security," announcing that effective immediately, no contractor, supplier, or partner doing business with the U.S. military may conduct any commercial activity with Anthropic. CBS News article for those not fond of Twitter
Around the same time, Trump ordered every federal agency to immediately cease using Anthropic's technology, while allowing a six-month phase-out period for agencies like the DOW already using it.
Declaring a company a supply chain risk is typically reserved for businesses operating out of adversarial countries - Huawei, for example. As far as I can tell, Anthropic is correct in describing this as an unprecedented action when applied to an American company. Especially one that, as far as I can see, hasn't done anything wrong except refuse to jump when asked.
Anthropic says it will challenge any supply chain risk designation in court, calling the move "legally unsound" and warning it would set a "dangerous precedent for any American company that negotiates with the government." Anthropic's press statement.
They also argue that under federal law, the designation can only apply to the use of Claude as part of Pentagon contracts, and cannot affect how contractors use Claude to serve other customers.
Not one to let an opportunity or a still-warm corpse go to waste, Altman announced that OAI had struck a deal with the Pentagon. In speech so smarmy that I'm not sure there's anything there at all, Altman claims the deal preserves the same core principles Anthropic had fought for: prohibitions on domestic surveillance and autonomous weapons. I am unsure why the USG would find this any more acceptable from OpenAI than it did from Anthropic, except that they (quite reasonably) expect Altman to be more "morally flexible".
There's a petition circulating where hundreds of Google and OAI employees publicly ask their respective corporate overlords to stand with Anthropic. Apparently all signatures are validated.
Meanwhile, Scott, mild-mannered to a fault, and very loath to dip his toes into political waters, is losing it on Twitter. And I agree with him. If the DOW finds Anthropic's terms so unbearable, that should have been considered before signing the contract. If they changed their mind, they ought to have canceled and accepted whatever penalties that involved, instead of using the full weight of the state for what can only be described as bullying. If domestic mass surveillance and fully automated weaponry are legally off the table anyway, then why all the fuss over codifying that in a legal document?
Goddammit. It's only February. I'm tired, boss. I just find it very funny that:
If anything deserves the designation "supply chain risk", it's an unfriendly autonomous AI (though I agree with Anthropic's claim that it is limited to use in association with government contracts)
Is that "unfriendly autonomous AI" in the room with us right now? I think that's begging the question.
Anthropic, or by extension Claude, has shown no "unfriendliness" I can think of. That term brings to mind intentional collusion with hostile foreign actors, including intentional backdoors or deliberate sabotage. Political and moral disagreement that stays entirely within legal limits does not count. The Democrats cannot declare Republicans enemies of the state wholesale, nor vice versa, despite each working to undermine or reverse the other's preferred policies.
Anthropic has not tried to stop the Pentagon from conducting fully autonomous drone strikes or mass domestic surveillance. They have politely declined to aid and abet them, after signing a contract that says so. I can only hope the DOW has lawyers too; this wasn't some hidden EULA activated by simply browsing their website. Supply chain risk? I see a vendor negotiation that didn't go the way one side wanted. There are other vendors out there; they didn't have to go with Anthropic.
I stress: the specific objection Anthropic raised was to mass domestic surveillance and fully autonomous lethal systems. If opposing those makes an AI "unfriendly," I'd want to know what "friendly" looks like, because I don't think I'd like the answer.
Nor is Claude autonomous in any meaningful sense. Is it running independent cloud instances on exfiltrated weights? Not that I'm aware of. There are no plans to allow for this, and there are pre-existing safety measures to prevent it.
What exactly has Claude done that other competing models haven't? In what sense is it more unfriendly than Grok, or ChatGPT? Is it more autonomous? Only in the loose sense that I'd count on Opus 4.6 to get a lot more done than any Grok.
The more you squint at this, the stranger it gets. Anthropic wanted contractual guarantees against things that are supposedly already illegal. The Pentagon's response to "put that in writing" was to designate them a national security threat. If the restrictions are redundant because law already covers them, the resistance to codifying them is hard to explain charitably.
What do people even mean by this anymore? Do people think they stopped after the Snowden leaks? I'm old enough to remember liking candidate "constitutional law professor" Barack Obama when he criticized the Bush administration's warrantless wiretapping program, then disliking his decision to continue and expand it once he was elected. Or tech companies protesting PRISM after the leaks, before promptly jumping at the chance to (algorithmically, I'm sure) ban things the Biden administration asked them to. Very stunning and brave moral record they've got going there.
I'm not sure I should trust Anthropic to be a better moral actor than the government here: they were willing to dance with the devil they already knew was doing this sort of thing, selling a product for which this is probably one of the clearest use cases. To be clear, I'm not the biggest fan of such programs continuing (although I can acknowledge they might be quietly stopping all kinds of bad actors), I'm just jaded from literal decades of "principled" stands against it mostly just sweeping things under the rug.
ETA:
If I had to guess, Anthropic wants to be the ultimate arbiter of what "the law" says here (or at least, what their "contractual guarantees" mean). So does the administration (and I'm sure the judiciary is willing to fight them on that on occasion).
Precisely. I am once again reminding people that the vast majority of folks can't even identify the names of the two main programs that were contentious, much less say anything about how they worked... and even further less about how they were different from each other.
...and yet somehow even further less about the subsequent history of those two programs. What follow-on statutory authorizations looked like or whether each of the programs continued.
No, people mostly don't have a clue. They absorbed a bunch of propaganda over a decade ago, never made coherent sense of it at the time, and are now just half-remembering faint glimmers of propaganda from days past.
You can continue to David Sternlight it all you want, the government was still Hoovering up all the metadata for every phone call in the United States from most carriers, and they were tapping the major email providers and Hoovering up all the metadata AND content. No, I don't remember what the different programs were called. Sure, they weren't supposed to look at that data unless it was within some number of hops of some targeted party, but they took it all anyway.
As for the statutory authorizations, they were black programs and their replacements are almost certainly black. There's no statutory line item for PRISM or XKEYSCORE any more than there was for the SR-71, and there won't be for the replacements either.
The propaganda here is by those pretending this isn't a big deal. Of course, such mass surveillance programs have been leaked before -- ECHELON, the program behind AT&T's Room 641A, and the one Qwest's Joseph Nacchio went to jail shortly after refusing to play ball with. In a few years everyone forgets and is shocked when the next such program leaks.
I'll just jump in here to say that this is the first outright false thing in this comment. The rest of your comment is just admitting to the truth of my comment. You don't actually know the differences; you don't actually know how they worked; you don't know the follow-on history, how the statutes changed, etc.
This was a close second to being outright false. Actually, I'll probably say that it's outright false. You could make modifications to it to be true, but as stated, it's outright false.
Look, I'm not pretending it isn't a big deal. Of course it's a big deal. That's why you should put in the effort to understand it instead of continuing to be false false false.
They were intercepting the lines between the Google front-end servers and the Gmail backends to get all the data out in the clear. That they then pretended they didn't see the stuff that didn't relate to a targeted individual doesn't mean they didn't have it. They use a very non-standard definition of the term "collected" to claim they didn't "collect" the data that didn't relate to targeted individuals, but they went through all of it.
Facts not in evidence. We've been over this. There was one slide, where this was presented as an idea. There was none of the information you would have expected on a slide like that about implementation details, authorities, measurement of flows, nothing. We have literally zero actual evidence that they actually did this. It is entirely possible that they did do this, but we just frankly don't know. If they did do this, it would not likely be related to the two major programs that were controversial from Snowden leaks, if, ya know, you had any understanding whatsoever of how those programs worked. Showing again that you don't know anything about these programs and are just free-associating.
I'm not sure which actual claim this is referring to, because it's too vague. You might be trying for something that was real, but I can't tell, because you're again just free-associating rather than speaking about any genuine knowledge of the leaks or the law or literally any real, actual information that we have.
This, I believe, is pretty much just false. They have a pretty clear definition of when they "collect" information, and they're pretty clear that they do collect information from people who aren't the targeted individual. They talk about this very explicitly.
You just have literally zero clue how any of this works, because you've persistently refused to educate yourself at all. It's really really really obvious and really bad. The last time we did this, I painstakingly forced you to the point of demonstrating that you were capable of downloading a document (yay! you can use a computer!), but you immediately went on to demonstrate that you were incapable of reading it.