This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

I'm a critic of "My movement is only the good people, and the bad ones are unrelated." Sorry, but if you lie down with dogs, you wake up with fleas. Your stance would impose zero consequences for extremism.
As an example, imagine that there was a gas attack on a public place. The police arrest the perpetrators and discover that they were all members of the California Chemistry Club. Shortly thereafter, there's another gas attack, and the perpetrators are also members of the CCC. And again, and again, and again. It's weird that it keeps happening, but it's not like the Club has any relation to the attacks. No one is accountable for the perpetrators' actions except themselves. Under your framing, people couldn't even think that the organization might be promoting or benefiting from those actions, because only a small number of its members are carrying out attacks.
If Boogaloo Boys are shooting up police stations, then it's evidence that they're a violent group and should be (formally or informally) punished for that. The alternative is playing whack-a-mole after the fact.
Take half a second and think about this with your brain. "My movement is only good people and the bad ones are unrelated" is the conclusion someone would reach if they also believed "a small number of people did something bad, so everyone tangentially related is responsible or guilty." You're not arguing against me; you're agreeing with me in pointing out the flaws of this logic.
Actually, my stance assigns full consequences for bad acts to the people who commit them, instead of trying to absolve them. As Reagan once said
Ok, in your own framing only a small number did it, so why haven't most other members done anything bad? Maybe the club is so big that there are niche insider clubs inside it that it can't control. Like how the "rationalists" had the niche cult of "Zizians" who murdered people. But would you blame someone like Scott Alexander or Yudkowsky for those murders? Do you blame the EA community? Would you blame them for the attempted assassination of Sam Altman? Yud can claim all he wants that he doesn't want houses firebombed, but his anti-AI rhetoric led to this, so he's guilty too.
I don't. I say "wow, those individuals who did something bad are bad individuals; I blame them for their own choices and not society." But your logic says Yud is a threat.
And as a member of TheMotte, a rationalist-adjacent site, do you accept responsibility for how the fleas you apparently lay down with tried to firebomb Sam Altman's house? I'm going to assume no, and that you agree with my argument of "that guy is just that guy, he's not me" once you're being asked to account for the bad people of "your group". I'll believe you are sincere in your "personal responsibility for other people being bad" stance when I see you apply it to yourself and accept personal responsibility for the bad people in your own groups.
I don’t like Yud much, so it doesn’t really count, but I would lay some blame for anti-AI violence at his door. He’s stoked panic and sky-falling doomerism about it for more than a decade, on the basis of no meaningful knowledge or experience whatsoever, not pausing for even a second when his ideas about how AI would materialise (alien optimisers, FOOM) turned out to be completely wrong. He has consistently advocated for maximally violent approaches to preventing AI, objecting to individual efforts only on the grounds of pragmatism.
In fact, I would say he’s one of the worst people to do this kind of thought experiment on.
Where's the consistency in calling for data centers to be bombed (presumably with employees inside) but declaring it a step too far to firebomb the CEOs responsible for repurposing the entire global economy toward AI data center buildup? If data centers are valid military targets, then surely data center builders would be too.
Individual criminals cannot consistently enforce a world-wide treaty regulating AI development, making any violence they commit useless and counterproductive. Only laws adopted and enforced by the most powerful countries in the world can do that. If you kill Altman or blow up a datacenter, then you are arrested and they continue with a different CEO or a different datacenter; if you slaughter every OpenAI employee, then Anthropic does it; if you somehow personally hunt down and kill everyone in the U.S. who knows what a "transformer" is, then China does it. Here is the post he wrote on the subject following the attempted firebombing:
Eliezer Yudkowsky: Only Law Can Prevent Extinction
That sounds like more of an argument from practicality, then. In that case it's a matter of whether it's easier to reach a democratic mandate in all major countries plus autocratic buy-in from Russia and China, versus a small contingent of fanatical extremists, say 5% in each of the countries leading in AI (of which there are really only two), throttling AGI until global leadership can be replaced by a younger generation that subscribes to threat models of extermination by AI. And even that's on the presumption that the violence is in fact counterproductive, and that you don't end up with a Shinzo Abe's-assassin type of case, where the murder is the catalyst for political reform. Suppose Sam Altman burns to popular applause and leaders finally recognize just how unpopular AI is.
Though I think the whole hypothetical is farcical, since in reality the only threat posed by AI seems to be wasting everyone's time and money and flooding the internet with slop.
The morality of violence depends on the practical consequences. In consequentialist moral frameworks (including Yud's) this is trivial. In most deontological moral frameworks violence is wrong unless certain conditions are satisfied, some of which relate to practical consequences - for example in Catholic just war theory a just war must be fought with a reasonable probability of success. And in virtue ethics engaging in violence without a plan to achieve anything by it is a vice.
Sounds kind of presumptuous. Suppose the assassination succeeded in all its goals, and there's a global AI freeze. Then from a consequentialist view: oh, I suppose I supported individual violence after all, given it turned out to be effective!
That appears to be Eliezer's view. He thinks it is morally justified to use effective violence to stop probably-unaligned AI development, and that in Western democracies firebombing data centres is ineffective violence and lobbying governments to pass laws is, as libertarians would agree, effective violence.
On the relative effectiveness of randos blowing shit up vs passing laws, which is an is question, not an ought one, he is obviously correct.