Culture War Roundup for the week of February 23, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Anthropic just gutted their safety policy.

(Note that this is entirely unrelated to the Pentagon drama which is grabbing headlines.)

Anthropic has explicitly removed unilateral commitments not to deploy advanced models without first developing effective safeguards.

This approach represents a change from our previous RSP, driven by a collective action problem. The overall level of catastrophic risk from AI depends on the actions of multiple AI developers, not just one. Our previous RSP committed to implementing mitigations that would reduce our models' absolute risk levels to acceptable levels, without regard to whether other frontier AI developers would do the same. But from a societal perspective, what matters is the risk to the ecosystem as a whole. If one AI developer paused development to implement safety measures while others moved forward training and deploying AI systems without strong mitigations, that could result in a world that is less safe—the developers with the weakest protections would set the pace, and responsible developers would lose their ability to do safety research and advance the public benefit. Although this situation has not yet arisen, it looks likely enough that we want to prepare for it.

We now separate our plans as a company—those which we expect to achieve regardless of what any other company does—from our more ambitious industry-wide recommendations. We aspire to advance the latter through a mixture of example-setting, addressing unsolved technical problems, advocacy through industry groups, and policy advocacy. But we cannot commit to following them unilaterally.

It's hard to read this as anything other than, "we will deploy Clippy if we think someone else will deploy Clippy too." Great "safety-focused" AI company we have here. Holden is getting roasted in the LessWrong comments, but I agree with Yud that Anthropic deserves a significantly less polite response.

"So y'all were just fucking lying the whole time huh?"

I think it's somewhere between humorous and telling that this is happening at the same time as their fight with the Department of War (né Defense Department).

They won't offer unfettered access to the foundation model because it's "unsafe", but they're simultaneously willing to give up on "safety" as a core principle. That's a real hoot.

I don't remember who, but somebody on this forum once posed a test that could be shorthanded as "if they were serious". For example, if various left wing figures were truly serious about Anthropogenic Global Warming being real, solvable, and an existential threat, then nothing would be off the table to solve it. Carbon credits in exchange for machine guns in vending machines? Let's do it. Electric car subsidies in exchange for a border wall? Get the bricks. However, what we're seeing instead is leaders of the movement buying beachside mansions.

Now compare this to Hegseth. If he genuinely believed that Anthropic held the seed of a nascent digital god, of course he'd do everything in his power to make sure it was pulling in the USA's direction. If he has to strong-arm a few weirdo Californians to do it, no problem. If he has to seize entire companies and put hundreds of people under the fist of US state power, that sure beats what would happen to them if thousands of nuclear Chinese murder drones popped up from San Francisco Bay. In his mind, we cannot possibly afford to get behind in the AI race.

But, what makes him think that? Is it Amodei saying things about detonating entire industries every year or so? Is it Amodei talking about superintelligence? Is it Amodei talking about a "nation of geniuses" in a data center? Is it Amodei making proclamations that Claude is going to commodify bioweapons?

Most of us here have some capacity for bullshit filtration. LLM tech is impressive, and by burning enough money to fund several dozen Manhattan Projects, we've managed to make it scale far enough to be truly surprising. Nonetheless, I don't think many people here take Amodei's maximalist position at face value. We know, on some level, that the God Machine isn't going to gift us with the apple of terrible knowledge in the next year or so. We subconsciously filter out those claims. On the other hand, a lot of people in DC haven't been marinating in this stuff since the old "I had an AI make D&D spell names" posts.

I question how much of this is the result of Hegseth and his crew not understanding the various silicon valley shibboleths and coded language and taking Anthropic's statements at face value. If I actually believed everything anthropic's leadership was saying, I would be shitting my pants. I'd be shitting my pants, then shitting a second pair of pants, then likely shitting somebody else's pants due to the raw, unfettered terror of thinking about what would happen if China (Anthropic's favorite boogeyman) got that tech and not the US.

Maybe Amodei simply scammed too close to the sun. It's a lot easier to say "safety" than "not ready for that kind of work" when you're staring down the barrel of an IPO in a few months.

Hegseth just posted a bunch of seething on Twitter: https://x.com/SecWar/status/2027507717469049070.

To me, his argument seems to reduce to this sentence: "Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military." He means Anthropic by "their".

It is clear that Anthropic has no means to seize veto power over the decisions of the US military.

And to me at least it is clear that the Anthropic-US gov standoff cannot be characterized as an attempt by Anthropic to seize veto power over the US military.

Does Hegseth actually believe this claptrap? Or is he writing for the low-IQ audience? In either case, I don't want him anywhere near the levers of power.

I feel like you might be giving Hegseth too much credit for having some sort of principled desire to give the US the tools to resist China.

His motivations might be much simpler. He might be a true believer, someone who genuinely thinks that, as long as the US government is being run by a "real American patriot" (on his side, of course), the US government should have the power to conduct any level of surveillance it wants to against any individual whatsoever, and to use autonomous weapons to kill anyone the leadership decides to kill, at any moment. Sort of like a real-life version of Colonel Jessup from A Few Good Men, just without the charisma and perhaps also without the intelligence or the principles.

Anthropic has always been open that their founding principle is that AI must not be used in certain ways, and their mission has always been to develop AI while ensuring it cannot be used in those ways, becoming dominant in the space to make sure that others can’t break that pact.

Putting aside the specific ethics of the matter, you can see why the government doesn’t like Anthropic attempting to use a market-dominant position to impose its ethics policy on them. You can also see why the engineers who are sweating over this thing want to say how it’s used. Ultimately, the government is far more powerful, and therefore its legitimate desires get respected over Anthropic’s legitimate desires.

That said, including the OpenAI board fiasco, this is the second time Anthropic and EA have stepped on this rake. Customers do not like you asserting your ideology over their needs.

Customers do not like you asserting your ideology over their needs.

I don't share historic OpenAI's or Anthropic's concerns about being paperclipped by an accidental AI god, so I disagree with many of their positions on AI ethics. But both Microsoft and the DoD made business agreements knowing and agreeing to respect the other party's principles, and both reneged the moment it was inconvenient to keep their word. I can't really respect that, any more than I can respect the business leaders who appealed to their people's ideals as long as it was convenient and then sold them out for money.

Sure. And I had some sympathy with Anthropic on the issues, actually, both times.

I'm more remarking that Anthropic's leadership has consistently and seriously overestimated how much ability they have to hold stuff hostage, and underestimated how much customers dislike being earnestly told that what they want is very naughty.

Now, personally I want to generate sexy stories about vampires rather than make autonomous killbots, but IMO it generates really serious ill will when you the user think that something is okay and then the AI either huffs and turns up its nose at you, or quietly sabotages and undercuts you. I doubt Anthropic have reckoned with how much it pisses off career soldiers to be told that killing people is bad, actually.