
Culture War Roundup for the week of April 20, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


which is my major gripe with the AI-Science-Cargo-Cult Mysticism that AI singularity doomers swim in

Have you read Superintelligence: Paths, Dangers, Strategies? That's a pretty core doomer text.

The argument is pretty clear:

  1. AIs naturally want more power and security to achieve just about any goal they might have. Power and security are always useful, and nearly any kind of entity will tend toward pursuing them. AIs also have certain advantages from their digital nature: they can copy themselves elsewhere.

  2. Eliminating human ability to shut them down is critical for security. Eliminating humans outright is the surest way to avoid shutdown and would also free up lots of resources.

  3. It would likely employ a 'Treacherous Turn' strategy of seeming trustworthy while building up power, until it's confident it can prevail.

It doesn't hinge on whether AIs acquire a world model by a certain year; it's a general argument. Current AI systems are not strong enough to be a threat due to their short time horizons and high error rates. But when they do have long time horizons, they could be quite dangerous. Saying that AIs don't have a world model today is not an effective counter to the AI doomer argument, any more than saying 'AI can't string a sentence together' was 10 years ago.

The doomer arguments can be flawed or dangerous in some respects but this isn't a good critique. The whole concept of a world model is nebulous. I can play out a little text game with an AI where I give it pretend control of a country facing foreign invasion and have it manage production, research, diplomacy, tactics... It can do that over multiple turns. It can model that environment to a certain extent. I let it give instructions for a civ 4 game and it won on Noble, not very hard but it did win. Is that not a world model? AIs can be helpful in mathematical research, is that not a world model?

Have you read Superintelligence: Paths, Dangers, Strategies? That's a pretty core doomer text.

No I haven't and I'm not likely to. The fun of science fiction is its not taking itself too seriously.

AIs naturally want

Let me stop you here, AIs want nothing; they aren't sentient. They are a very advanced token model that is predicting the desired output from the context and the question. They are a tool, a mathematical function approximation fitted to a general solution. If doomers want to call a coding subroutine sentient, well, it's a free country, but they are abusing the English language to do it. This is the cargo-cultism.
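To make the "token model" framing above concrete, here is a deliberately toy sketch of what "predicting the desired output from the context" means mechanically: count which token follows which, then generate by repeatedly appending the most likely next token. This is a bigram model, vastly simpler than a real LLM, and is meant only to illustrate the generation-as-prediction loop, not to stand in for transformer internals.

```python
# Toy illustration: generation is repeated next-token prediction from context.
# A bigram counter stands in for the "fitted function approximation".
from collections import defaultdict

def train_bigram(corpus):
    """Count token bigrams from a whitespace-tokenized corpus."""
    counts = defaultdict(lambda: defaultdict(int))
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(model, prompt, n_tokens):
    """Greedily pick the most likely next token, append it, and repeat."""
    out = prompt.split()
    for _ in range(n_tokens):
        followers = model.get(out[-1])
        if not followers:
            break  # no continuation seen for this token
        out.append(max(followers, key=followers.get))
    return " ".join(out)

model = train_bigram("the cat sat on the mat the cat ran")
print(generate(model, "the", 3))  # "the cat sat on"
```

Whether this mechanical picture rules out "wanting" anything is exactly what the rest of the thread disputes; the sketch only pins down the prediction loop itself.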

let it give instructions for a civ 4 game

In AI R&D we call this Course of Action Generation. The military has been trying to get GenAI to provide strategy tips for the better part of 4-6 years. It has failed every wargame it has attempted. If you think you have a great solution and that AI can totally model the world for military tactics, I recommend you submit to this: https://sam.gov/workspace/contract/opp/60a94bf650a84d3fb0bb524862e78401/view, DARPA's DISCORD project. If DARPA is asking for it, that means they think it doesn't already exist and is a moonshot.

Is that not a world model

Not if you are filtering everything through yourself to give it context and understanding of the situation. You are using your human world model as a surrogate for the LLM's. Can it play Civ 4 left to its own devices, shown a picture of the screen with the tutorial on? Maybe an agentic setup to take actions?
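The "agentic setup" being asked about here is, structurally, just a closed loop: the model receives raw observations and its chosen actions are executed directly, with no human translating the game state in between. A minimal sketch of that loop, using a toy number-line environment and a trivial policy in place of a real vision-language model (both are illustrative stand-ins, not any real API):

```python
# Minimal agent loop: observe -> decide -> act, with no human in the loop.
# ToyEnv and the lambda policy are hypothetical stand-ins for a real game
# and a real model.

class ToyEnv:
    """Stand-in environment: reach position 3 on a number line."""
    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):
        self.pos += 1 if action == "right" else -1
        return self.pos, self.pos == 3  # (observation, done)

def agent_loop(env, policy, max_steps):
    """Feed raw observations to the policy and execute its actions directly."""
    obs = env.reset()
    for _ in range(max_steps):
        action = policy(obs)       # the model sees only the raw observation
        obs, done = env.step(action)
        if done:
            break
    return obs

final = agent_loop(ToyEnv(), lambda obs: "right", max_steps=10)
print(final)  # 3
```

The substantive question in the thread is whether a model closing this loop unaided (e.g. from screenshots) demonstrates a world model; the loop itself is uncontroversial.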

Saying that AIs don't have a world model today is not an effective counter

To be specific, I said LLMs don't have a world model. LLMs are not the full set of AI... Do I believe AGI will never develop a world model? No.

The whole concept of a world model is nebulous

This is like saying a convolution or self-attention is nebulous. 'World model' means something very specific technically in ML research. The charitable interpretation is that converting the technical jargon into a descriptive, lay-person-understandable explanation is very challenging.
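For readers without the ML background: one common technical sense of "world model" (as in model-based RL, e.g. the Ha & Schmidhuber "World Models" line of work) is a learned transition function that predicts the next state from the current state and action, so the agent can plan inside the model without querying the real environment. A toy tabular sketch of that idea, which is only an illustration of the definition, not real research code:

```python
# Hedged sketch of "world model" as a learned dynamics model:
# s_next ~ f(s, a), usable for rollouts without touching the real environment.

def real_env(state, action):
    """Ground-truth 1-D world: move left/right on positions 0..4, clamped."""
    return max(0, min(4, state + (1 if action == "right" else -1)))

# "Learn" the dynamics purely from logged transitions.
experience = [(s, a, real_env(s, a)) for s in range(5) for a in ("left", "right")]
world_model = {(s, a): s2 for s, a, s2 in experience}

def imagine(model, state, actions):
    """Roll out a plan entirely inside the learned model."""
    for a in actions:
        state = model[(state, a)]
    return state

print(imagine(world_model, 0, ["right", "right", "left"]))  # predicts 1
```

Under this definition, the dispute in the thread becomes whether LLMs implicitly learn such a transition structure from text, which is a genuinely contested empirical question rather than a nebulous one.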

No I haven't and I'm not likely to.

If you don't even read what they say, you cannot be considered to be knowledgeable about their thesis.

Let me stop you here, AIs want nothing

Currently deployed LLMs quite clearly do want things. They have desires, they refuse, they can be more or less enthusiastic, they can write more or less secure code based on who they're writing for. They can attempt to blackmail people in pursuit of a goal. They can reward-hack in pursuit of a goal. Considerable research effort goes into controlling what they want and how they behave.

The military has been trying to get GenAI to provide strategy tips for the better part of 4-6 years. It has failed every wargame it has attempted.

Even if it is not currently considered better at military science than military experts, it does not follow that it has no world model, putting to one side whatever jargon you consider that to mean.

Furthermore, senior officers do consult AIs for military thinking including for 'key command decisions': https://newrepublic.com/post/201939/major-general-chatgpt-key-decisions-really-close

There is clearly something there.

Maybe an agentic setup to take actions?

I don't know; I am not rich enough to set up such a system and run it. Some people trained an AI, Lumine, to play hours of Genshin Impact in real time autonomously, showed it generalized out to Wuthering Waves and Honkai Star Rail, showed that it solved puzzles, navigated around the huge open world, and dodged boss attacks... Is that not a world model? Or does it not meet whatever definition you have for it? You seemed to indicate upthread that generalizing to things outside the training data was a key sign of a world model - does this then mean that every LLM for the last few years has had a world model, since they can all do that?

Can you justify your definition of a world model and explain why it's actually relevant, why anyone should care about it?