Culture War Roundup for the week of February 23, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Anthropic has always been open that their founding principle is that AI must not be used in certain ways, and their mission has always been to develop AI while enforcing that it cannot be used in those ways, becoming dominant in the space to make sure that others can't break that pact.

Putting aside the specific ethics of the matter, you can see why the government doesn't like Anthropic attempting to use a market-dominant position to impose its ethics policy on them. You can also see why the engineers who are sweating over this thing want a say in how it's used. Ultimately the government is far more powerful, and therefore its legitimate desires get respected over Anthropic's legitimate desires.

That said, counting the OpenAI board fiasco, this is the second time Anthropic and EA have stepped on this rake. Customers do not like you asserting your ideology over their needs.

Customers do not like you asserting your ideology over their needs.

I don't share historic OpenAI's or Anthropic's concerns about being paperclipped by an accidental AI god, so I disagree with many of their positions on AI ethics. But both Microsoft and the DoD made business agreements knowing and agreeing to respect the other party's principles, and both reneged the moment it was inconvenient to keep their word. I can't really respect that, any more than I can respect the business leaders who appealed to their people's ideals as long as it was convenient and then sold them out for money.

Sure. And I had some sympathy with Anthropic on the issues, actually, both times.

I'm more remarking that Anthropic's leadership has consistently and seriously overestimated its ability to hold things hostage, and underestimated how much customers dislike being earnestly told that what they want is very naughty.

Now, personally I want to generate sexy stories about vampires rather than make autonomous killbots, but IMO it generates really serious ill will when you, the user, think that something is okay and then the AI either huffs and turns up its nose at you, or quietly sabotages and undercuts you. I doubt Anthropic has reckoned with how much it pisses off career soldiers to be told that killing people is bad, actually.

I mean, current kerfuffle aside (which you have to admit is highly contingent; there's no way anything like this plays out if Trump isn't president), Anthropic seems to be doing really well commercially? It has the fastest revenue growth of any of the AI companies (and on current trends would overtake OpenAI in the next year or so) and seems to be the leader in integration into workflows etc. Given its rather paltry free-tier adoption and rather high API rates, it's likely already significantly profitable on a marginal-inference basis. I'm not at all convinced that its ethical stance is hurting it (and its virtue-ethics approach may in fact relate to why it tends to have lower refusal rates than OpenAI and Gemini). I'd be curious about a poll of career soldiers' opinions on autonomous killing robots (the point of distinction: Anthropic did not prohibit the AI from helping kill people, only from doing so completely autonomously); I don't think they'd necessarily want to be out of a job.