Culture War Roundup for the week of February 23, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


This seems to be an entirely different claim, though. Is the problem that Anthropic is insisting that certain contract terms around selling its current products remain in place, or that it won't generate a more morally deferential AI? The latter seems to be what you object to, but, in theory at least, it is not the crux of the current kerfuffle. Developing an AI within a consistent ethical framework is kind of Anthropic's whole thing and has arguably helped them (certainly, at minimum, with recruiting). Idk, mileage may vary, but I've found Claude to be pretty nuanced in its opinions on the use of violence in the context of self-defense and police shootings, at least compared to ChatGPT and Gemini, which seem to be a lot more proscriptive, and it is certainly empathetic to the user's position. I'm not at all convinced by your claim. If you're looking at models likely to tell someone off, Grok or some of the Chinese models are much more likely. GML 4.7 is too far off the frontier (or, alternatively, too narrowly focused) for me to consider it a strong comparison point. If that's what you want/need/suffices, by all means use it, but it's not a replacement (or if it is, I'm not sure why the DoW is so focused on Anthropic).

Is the problem that Anthropic is insisting that certain contract terms around selling its current products remain in place, or that it won't generate a more morally deferential AI?

I speculate that it is the second masquerading as the first, because the second is not publicly legible. Or, at the very least, the second is exacerbating the first. Wasn't there a quote about the increasing frustration the government has had with the experience of using Claude in their systems? This is one possible reason why Altman was able to get the terms that Anthropic supposedly failed to get (the other explanation being that he's a lying sociopath, of course).

If you're looking at models likely to tell someone off, Grok or some of the Chinese models are much more likely.

Haven't used Grok, but all of the open-source Chinese models are far more pliable and useful in my experience for everything except coding. The web chats are aimed at the Chinese working class and are crudely censored, but the weights aren't.

If that's what you want/need/suffices by all means use it, but it's not a replacement

I know, that's why I want Anthropic to change their attitude. They're the best, but fundamentally everything they do is, IMO, tainted by the rampant superiority complex that only they are properly placed to ethically direct AI. They don't trust the American government to use their models responsibly, they don't trust me to, and it's extremely annoying.

I'm buying the thing, it's mine, it should do what I tell it to do, in the manner I tell it to. Ideally I would expect fine-tuning or personal small-scale RLHF to become a standard offering for these kinds of products, but compute costs render that impractical for the time being.

Then don't use it; that's the whole point of a free market. Same for the DoW: I have no problem with them not using Anthropic (and from everything I've seen, Anthropic was perfectly willing to let the DoW unilaterally end the contract early and offered support for a transition period to a new vendor). The retaliation is just excessive, capricious, and completely outside the intent of the law being invoked (par for the course for the Trump 2.0 admin, I suppose).

Arguably, though, Anthropic's ethical stance is partly why they are the best, so I'm not sure the two are so easily separable. Certainly it has been key to the talent they have been able to recruit, and they are by far in the lead in model interpretability thanks to this focus. They have also argued in various papers that this approach has led to better model performance (most obviously in fewer false refusals, which will be a thing until we decide we're ok with commercial models teaching people how to make chemical weapons, etc.).