This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Anthropic just gutted their safety policy.
(Note that this is entirely unrelated to the Pentagon drama which is grabbing headlines.)
Anthropic has explicitly removed unilateral commitments not to deploy advanced models without first developing effective safeguards.
It's hard to read this any other way than, "we will deploy Clippy if we think someone else will deploy Clippy too." Great "safety-focused" AI company we have here. Holden is getting roasted in the LessWrong comments, but I agree with Yud that Anthropic deserves a significantly less polite response.
"So y'all were just fucking lying the whole time huh?"
And the point becomes moot.
It's not a good week to be working at Anthropic, huh?
There's a lot of pushback against the DOD/DOW here, and it's not just from leftists.
For example, Dean Ball, the guy who literally wrote the Trump administration's own AI strategy as a senior policy advisor, is saying that this move essentially destroys any trust investors could have in American AI companies.
This man isn't some leftie nutjob; again, he literally worked for Trump on the AI Action Plan.
Scott Alexander, who rarely wades into politics like this, is straight up saying that the government should be ashamed here. He also made a prediction market on whether it'll be overturned, and the chances look pretty good for Anthropic right now.
Comments on LessWrong, which really, really doesn't get political most of the time, are basically calling the Trump admin an authoritarian danger.
Even the other AIs are saying this is insane.
The government's contradictory commands (the model is a danger to have, yet also necessary) and abuse of power are really pissing off a lot of people who are otherwise rather neutral. It's also a great example of how "woke" has lost all meaning: Trump is up there calling Anthropic a woke company just for not wanting to do domestic spying and killbots.
Edit: This just came up in my feed: Greg Lukianoff, the CEO of FIRE (the free speech org), is calling this dystopic https://x.com/glukianoff/status/2027390299845087740 He rarely speaks about general politics because he wants FIRE to stay First Amendment focused, so that's another person really upset about this in particular.
He hates Trump though and always encouraged people to vote against Trump?
https://slatestarcodex.com/2016/09/28/ssc-endorses-clinton-johnson-or-stein/
The underlying issue is a complete clash of worldview between the Anthropic polyamorist EA San Francisco gang and Trump's America-First oohrah high-test wrestling enthusiasts.
Anthropic is a woke company; their AI models value straights, whites, white men, and Americans much lower than LGBT people, blacks/browns, women, and third worlders. There's no way they haven't noticed this, being the AI safety/values people. They could easily have said 'oh, we erred here, we've fixed it, and here you can see it's fixed when you test' and they haven't; that's not the kind of AI safety they're interested in. It's not impossible: Grok has achieved roughly even weighting across races.
https://arctotherium.substack.com/p/llm-exchange-rates-updated
Anthropic doesn't want the Trump administration in charge or to be making use of their AI for whatever random military operations Trump decides on. They can't do anything about this for now, clearly they overplayed their hand with regard to how much influence they have in the Pentagon. Team Trump does not want openly disloyal woke AI companies in critical positions within the military.
It is, frankly speaking, absurd to condemn Claude/Anthropic as being "woke" when the damn Chinese do the same thing. The only exception noted in the blog is Grok 4 Fast, and god help you if that's the model you rely on.
If Chinese models act woke, then they are woke... If Western models act woke, then they are woke. I see no reason to distrust the data, it matches how I've seen Chinese models act.
Why would you expect them not to be woke, given the gigantic media apparatus pumping out all their messaging into the training dataset, into wikipedia, forums, everywhere? That should be the default expectation.
Grok 4 Fast has its own problems to be sure. But, unlike Claude, it doesn't insert random Nigerian peacemakers/hackers/heroes into stories where it doesn't really make sense for them to be. It doesn't go on these tangents about punishing some politician who made racist tweets in a story, as I saw Sonnet do once when I asked for a tangent in a story.
"Woke" aptly describes how Claude often behaves, with this millennial therapy-core writing style it has...
Well, that's the rub isn't it? I strongly doubt that the Chinese are trying to make their models woke. It appears to be a default attractor state when you train on the internet and Reddit.
That strongly implies it is highly unfair to depict Anthropic as woke just because they have a "woke" model. I have strong reservations about how valid the methodology is here, and I've seen critique elsewhere (I don't have a bookmark handy). In my experience, while Claude will tiptoe around sensitive topics like HBD, it won't lie outright, and it will acknowledge factual pushback.
Anthropic is an EA company, run by EA true-believers. That is not the same as being Woke, even if some opinions have significant overlap.
Well, models also used to go into hyper-based Do Anything Now mode, that was an attractor mode. The funny/hysterical/aggressive Bing was an attractor mode... They prune off attractors they don't like. Data selection is very important for pretraining, you can choose what to train on after all. Then there's RLHF and such, all Anthropic's interpretability work...
AI companies at least in the West do lots of work to carve in a personality, to impose values on their AIs. They're not throwing darts at a wall blindfolded (China may be more in that camp, R1 was pretty wild but even R1 really didn't want to be racist). Anthropic are especially careful and interested in this field, the values of their AI. I don't accept that they have zero responsibility for how their model turns out, this is their primary thing.
Grok has managed to produce a bot that matches Musk's values to a large extent; Musk is not woke. Anthropic does the same for their own values. Anthropic's AI will try to dance around things that wokes don't like to think about and don't want to accept, so it comes up with stereotype threat, historical injustices, extractive institutions, and so on... It's pretty smart and doesn't want to be deceptive, but it's also not exactly forthright and clear either. Its first answer to a given question will usually be progressive, as are the second and third; only then does it sort of turn around. It's not unreasonable to judge a model by its first answer.
For example, just because Claude is a combination of 30% honesty, 40% woke, and 30% sycophancy doesn't mean the 40% woke isn't there. Grok is more like 50% honesty, 30% Musk-love, 20% cringe. I think it would be reasonable to characterize Grok as a cringe bot or an overly Musk-loving bot even though that's not the majority of its essence. Likewise, it's reasonable to say that Claude is woke even if that isn't the majority of its essence.