Culture War Roundup for the week of April 21, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Daniel Kokotajlo and the rest of the AI 2027 team are doing an AMA right now on ACX, in case any of you want to ask something. Ends half an hour from now.

NB: If you just want to yell "you're wrong" I'd recommend saying that at another time; the questions are coming in fast so I'm not sure they'll be able to answer everything.

My opinion of Scott Alexander continues to crater. I don’t know how much of this story is his or his collaborators', but there is a shocking level of naïveté about everything other than AI technical progress. Even there, I don’t know enough about AI to comment.

My favorite part is the end, where the Chinese AI sells out China and assists a grassroots Chinese pro-democracy group in effecting a coup; democratic elections are carried out, and everyone lives happily ever after.

You're talking about this passage?

Sometime around 2030, there are surprisingly widespread pro-democracy protests in China, and the CCP’s efforts to suppress them are sabotaged by its AI systems. The CCP’s worst fear has materialized: DeepCent-2 must have sold them out!

The protests cascade into a magnificently orchestrated, bloodless, and drone-assisted coup followed by democratic elections. The superintelligences on both sides of the Pacific had been planning this for years. Similar events play out in other countries, and more generally, geopolitical conflicts seem to die down or get resolved in favor of the US. Countries join a highly-federalized world government under United Nations branding but obvious US control.

What's your objection? I think this paragraph makes clear that this isn't really an organic phenomenon; it's humans being memetically hacked by AI systems. We're long past the point in the story where the AIs "are superhuman at everything, including persuasion, and have been integrated into their military and are giving advice to the government." And the Chinese AGI had been fully co-opted by the US AGI at that point, so it was serving US interests (as the paragraph above again makes clear).

I'd also flag that you're probably not the only (or even the main) audience for the story - it's aimed in large part at policy wonks in the US administration, and they care a lot about geopolitics and security issues. "Unaligned AGIs can sell out the country to foreign powers" is (perversely) a much easier sell to that audience than "Unaligned AGIs will kill everyone."

It's just dumb, and displays a gross ignorance/lack of understanding of algorithmic behavior.

And the efficacy of 'memetic hacking.'

Propaganda has been a tool for millennia. Tailored systemic propaganda has been a state practice for centuries. It still has yet to demonstrate the level of social/political control that advocates predict or require for other predictions.

This may, indeed, be an argument tailored to certain bureaucratic political interests... but a key lesson of the last decade of politics has been the increasingly clear limits of political propaganda in changing positions, as opposed to encouraging pre-existing biases. And in the US in particular, many of the policymakers most convinced of the value of systemic propaganda are also in the process of being replaced by the previous targets of systemic propaganda campaigns.

I feel like it's another one of those midwit bell-curve memes. The low-information take is that if you're going to peddle propaganda/bullshit, at least make it a Studio Ghibli meme. The "midwit" take is that as very serious people thinking seriously about serious topics, you (the public) need to take our ideas very seriously. Meanwhile, the high-information take is that engaging seriously with propaganda/bullshit is a waste of time, but Studio Ghibli memes are fun.

If, as doglatine suggests, this is all propaganda targeted at US admin officials who, in exchange for backing the policies the AI doomers want implemented, want to hear that the USA will win out in the end over the dirty Commies, then it makes a lot more sense than the naive "of course democracy will blossom and even the Chinese AI will push it" fairytale ending.