
Culture War Roundup for the week of April 20, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


A follow-up on last week's discussion of LLMs and AI. [TLDR: I tested ChatGPT and was pretty shocked at how well it performed]

To recap, one of the criticisms of LLMs is that they are unable to create models of the world. So for example, according to one commentator (I believe it's Gary Marcus) an LLM will attempt to play impossible chess moves. Despite having the rules of chess in its training data, as well as large numbers of chess matches, it's (apparently) unable to have a working internal model of chess. By contrast, a reasonably bright teenager can learn pretty quickly how to play perfect chess. (Perfect in the sense of never making an illegal move.)
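To make the "illegal move" criticism concrete: rejecting an impossible move requires nothing more than an explicit board state plus a legality check, which is exactly the internal model the criticism says LLMs lack. Here's a minimal sketch (the coordinate scheme and restriction to knight moves are my own illustrative assumptions, not from any of the commentary above):

```python
# Minimal sketch of why legality checking needs an explicit board model.
# Squares are (file, rank) tuples in 0-7; only knight moves are modeled
# here, but a full engine handles every piece the same way.

KNIGHT_DELTAS = [(1, 2), (2, 1), (2, -1), (1, -2),
                 (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def knight_moves(square, occupied_own):
    """All legal knight destinations from `square`, given the set of
    squares occupied by the knight's own side."""
    f, r = square
    moves = set()
    for df, dr in KNIGHT_DELTAS:
        nf, nr = f + df, r + dr
        # Stay on the board and don't land on your own piece.
        if 0 <= nf < 8 and 0 <= nr < 8 and (nf, nr) not in occupied_own:
            moves.add((nf, nr))
    return moves

def is_legal(square, target, occupied_own):
    return target in knight_moves(square, occupied_own)

# A knight on b1, i.e. (1, 0), with its own pawn on d2, i.e. (3, 1):
print(knight_moves((1, 0), occupied_own={(3, 1)}))
```

The point of the sketch is that the legality check is trivial once the board state exists as an explicit data structure; the claimed failure mode is that the LLM has no such structure, only text.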

According to Google's AI (yes, I appreciate the irony here):

Why They Fail: LLMs are text-prediction models, not game engines. They lack a true internal, consistent model of the board state.

I decided to test this idea that LLMs are unable to model the world by creating a very simple game; in order to play the game it's necessary to have a simple model of the game state. As expected, the LLM made numerous errors.

But what was interesting was that I pointed out the errors to the LLM and it told me that it could fix these problems. And it did so in an interesting way: After each move in the game, it spelled out the game state in text. After that, it stopped making errors. Admittedly, this is a very cumbersome way to model the world -- by means of an iterative written description. But it seemed to work well for this very simple game. To my mind, this was rather astonishing and shocking. And if there is a cumbersome way to accomplish something, you can usually count on computers to accomplish it anyway by means of throwing more and more processing power at the situation. (Actually, that's not totally true, since some tasks have exponential or even combinatorial time complexity. But still.)
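The fix the LLM proposed, restating the full game state in text after every move, is essentially a scratchpad. Here's a sketch of the pattern, using tic-tac-toe as a stand-in for the simple game (the game and serializer are my own illustrative assumptions, not the actual game from the test):

```python
# Sketch: keep an authoritative game state and re-serialize it to text
# after every move, so the text itself carries the full state forward --
# the "iterative written description" described above.

def new_board():
    return [["." for _ in range(3)] for _ in range(3)]

def serialize(board):
    """Render the board as plain text after each move."""
    return "\n".join(" ".join(row) for row in board)

def apply_move(board, player, row, col):
    if board[row][col] != ".":
        raise ValueError(f"illegal move: ({row}, {col}) is occupied")
    board[row][col] = player
    return board

board = new_board()
apply_move(board, "X", 1, 1)
apply_move(board, "O", 0, 2)
print(serialize(board))
```

In the LLM's version, the serialized text is appended to the transcript after each move, so the model never has to reconstruct the state from the move history alone.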

In the last thread, my opinion was that LLMs are missing something essential. And I still think that, but I wouldn't be surprised at all if LLMs required very little theoretical augmentation to reach AGI.

The chess argument is not a good analogy at all because, like much AI criticism, it vastly overstates normal human capacity.

What's the biggest reason an amateur teenager doesn't make impossible moves? It's because they have a chess board right in front of them. It's extremely easy to track the state of the game and position of pieces when you have the laws of physics doing it for you. How many amateurs do you think could perfectly recreate a given game state if someone came along and threw the board over?

LLMs play chess entirely through text. It's the equivalent of asking a person to play a game of correspondence chess, but they can't recreate the game physically and they can't have any drawings of the game; all they can do is keep a record of the moves already made. Outside of literal chess masters, how many humans would get through such a game without making a mistake?

This is a good point as an explanation for the problem, but it's also not an excuse. LLMs are really great when they have every detail they'd need to know in-context, but they're still very inefficient at gathering that context, and then they're inefficient at retaining the important bits.

The problems with LLMs aren't some far-reaching philosophical shortcoming like "they don't have world models" (whatever that means); they're implementation issues, like not having eyeballs, not processing tokens efficiently, etc. But those are still big issues.

A "world model" is not really some far-reaching philosophical characteristic. LLM discussions require you to keep in mind what the LLM is actually doing: it is making very impressive statistical correlations between the semantic mappings of token embeddings, in a way that best conforms to the expected output. It is not modeling anything other than that. Whatever world knowledge it has is acquired only indirectly, through patterns in data.

If I asked you why a ball falls when I drop it, you'd say gravity; if I dropped the ball, you'd expect it to fall. If I asked why my glass of water has less water in it after an hour outside in the sun, you'd say evaporation. If I showed you a small metal projectile moving at high speed towards a person, you'd realize they are being shot at, that blood will come from the wound, that they will give a cry of pain, that someone was doing the shooting, that there should be a loud noise, etc. You have a "model of the world": you understand that there are causal factors that will produce a reaction, and that if you observe the reaction you can infer what those causal factors may be. These are all things that happen irrespective of the statistical correlation of words.

Now, obviously an LLM will be able to tell you all of these things, because all of these situations occur as text in its training data, over and over. But there are things that exist outside of that training data, and you should not expect it to perform the same way on those.

The idea that AI would need a detailed world model seems to run contra to the "It became self-aware at 2:14 AM Eastern Time" doomsaying. Skynet wasn't supposed to be rate-limited by interactive world manipulation.

And honestly I haven't seen as much motion as I'd expect on the world interaction front. Where are my automatic burger-flipping robot arms? There are lots of thankless low-skill (but not no-skill) jobs, at least in the food industry (meat packing, for example), that have pretty bounded motion and requirements, but I haven't seen anywhere near as much progress in those areas as I would have expected.

Exactly, which is my major gripe with the AI-Science-Cargo-Cult Mysticism that AI singularity doomers swim in. Basing your read of the real-world situation on the details of a sci-fi scenario that is not in any way grounded in actual science is insanity. Sci-fi is there for broad outlines and core elements, not details.

And honestly I haven't seen as much motion as I'd expect on the world interaction front.

It's a very active area of research but it hasn't reached the state that lay-folk would interact with it.

which is my major gripe with the AI-Science-Cargo-Cult Mysticism that AI singularity doomers swim in

Have you read Superintelligence: Paths, Dangers, Strategies? That's a pretty core doomer text.

The argument is pretty clear:

  1. AIs naturally want more power and security to achieve just about any goal they might have. Power and security are always useful, and nearly any kind of entity will tend toward pursuing them. AIs also have certain advantages due to their digital nature: they can copy themselves elsewhere.

  2. Eliminating human ability to shut them down is critical for security. Eliminating humans outright is the surest way to avoid shutdown and would also free up lots of resources.

  3. It would likely employ a 'Treacherous Turn' strategy of seeming trustworthy while building up power, until it's confident it can prevail.

It doesn't hinge on whether AIs acquire a world model by a certain year; it's a general argument. Current AI systems are not strong enough to be a threat due to their short time horizons and high error rates. But when they do have long time horizons, they could be quite dangerous. Saying that AIs don't have a world model today is no more effective a counter to the AI doomer argument than saying 'AI can't string a sentence together' was 10 years ago.

The doomer arguments can be flawed or dangerous in some respects but this isn't a good critique. The whole concept of a world model is nebulous. I can play out a little text game with an AI where I give it pretend control of a country facing foreign invasion and have it manage production, research, diplomacy, tactics... It can do that over multiple turns. It can model that environment to a certain extent. I let it give instructions for a civ 4 game and it won on Noble, not very hard but it did win. Is that not a world model? AIs can be helpful in mathematical research, is that not a world model?

Have you read Superintelligence: Paths, Dangers, Strategies? That's a pretty core doomer text.

No I haven't and I'm not likely to. The fun of science fiction is that it doesn't take itself too seriously.

AIs naturally want

Let me stop you here: AIs want nothing, they aren't sentient. They are a very advanced token model that is predicting the desired output from the context and the question. They are a tool, a mathematical function approximation fitted to a general solution. If doomers want to call a coding subroutine sentient, well, it's a free country, but they are abusing the English language to do it. This is the cargo-cultism.

let it give instructions for a civ 4 game

In AI R&D we call this Course of Action Generation. The military has been trying to get GenAI to provide strategy tips for the better part of 4-6 years. It has failed every wargame it has attempted. If you think you have a great solution and that AI can totally model the world for military tactics, I recommend you submit to DARPA's DISCORD project: https://sam.gov/workspace/contract/opp/60a94bf650a84d3fb0bb524862e78401/view. If DARPA is asking for it, that means they think it doesn't already exist and is a moonshot.

Is that not a world model

Not if you are filtering through yourself to give it context and understanding of the situation. You are using your human world model as the surrogate to the LLMs. Can it play Civ 4 left to its own devices, shown a picture of the screen with the tutorial on? Maybe an agentic setup to take actions?

Saying that AIs don't have a world model today is not an effective counter

To be specific, I said LLMs don't have a world model. LLMs are not the full set of AI... Do I believe AGI will never develop a world model? No.

The whole concept of a world model is nebulous

This is like saying a convolution or self-attention is nebulous. "World model" means something very specific, technically, in ML research. The charitable interpretation is that converting the technical jargon into a descriptive, lay-person-understandable explanation is very challenging.

No I haven't and I'm not likely to.

If you don't even read what they say, you cannot be considered to be knowledgeable about their thesis.

Let me stop you here, AIs want nothing

Currently deployed LLMs quite clearly do want things. They have desires, they refuse, they can be more or less enthusiastic, they can write more or less secure code based on who they're writing for. They can attempt to blackmail people in pursuit of a goal. They can reward-hack in pursuit of a goal. Considerable research effort goes into controlling what they want and how they behave.

The military has been trying to get GenAI to provide strategy tips for the better part of 4-6 years. It has failed every wargame it has attempted.

Even if it is not currently considered better at military science than military experts, it does not follow that it has no world model, putting to one side whatever jargon you consider that to mean.

Furthermore, senior officers do consult AIs for military thinking including for 'key command decisions': https://newrepublic.com/post/201939/major-general-chatgpt-key-decisions-really-close

There is clearly something there.

Maybe an agentic setup to take actions?

I don't know, I am not rich enough to set up such a system and run it. Some people trained an AI, Lumine, to play hours of Genshin Impact in real time autonomously, showed it generalized out to Wuthering Waves and Honkai Star Rail, showed that it solved puzzles, navigated around the huge open world, that it dodged boss attacks... Is that not a world model? Or does it not meet whatever definition you have for it? You seemed to indicate upthread that generalizing to things outside the training data was a key sign of a world model - does this then mean that every LLM for the last few years has a world model, since they can all do that?

Can you justify your definition of a world model and explain why it's actually relevant, why anyone should care about it?