Culture War Roundup for the week of April 20, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


A follow-up on last week's discussion of LLMs and AI. [TLDR: I tested ChatGPT and was pretty shocked at how well it performed]

To recap, one of the criticisms of LLMs is that they are unable to create models of the world. So for example, according to one commentator (I believe it was Gary Marcus), an LLM will attempt to play impossible chess moves. Despite having the rules of chess in its training data, as well as large numbers of chess games, it is apparently unable to maintain a working internal model of the board. By contrast, a reasonably bright teenager can learn pretty quickly how to play perfect chess. (Perfect in the sense of never making an illegal move.)
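To make concrete what "never making an illegal move" actually demands, here is a minimal sketch: once you have an explicit representation of the board, checking move legality is a mechanical computation. This toy checks only the knight's movement geometry (ignoring board contents, checks, etc.); the function name and coordinate scheme are my own illustration, not anything from the discussion above.

```python
# Toy illustration: with an explicit coordinate representation,
# checking a knight's move pattern is trivial arithmetic.
# Squares are (file, rank) pairs with 0-7 coordinates, so b1 = (1, 0).

def is_knight_pattern(src, dst):
    """True if the move from src to dst matches a knight's L-shape."""
    dx = abs(src[0] - dst[0])
    dy = abs(src[1] - dst[1])
    return sorted((dx, dy)) == [1, 2]

print(is_knight_pattern((1, 0), (2, 2)))  # b1 -> c3: True
print(is_knight_pattern((1, 0), (1, 3)))  # b1 -> b4: False
```

The point is that the check itself is nearly free; the hard part for an LLM is keeping the state representation the check runs against.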

According to Google's AI (yes, I appreciate the irony here):

Why They Fail: LLMs are text-prediction models, not game engines. They lack a true internal, consistent model of the board state.

I decided to test this idea that LLMs are unable to model the world by creating a very simple game; in order to play the game it's necessary to have a simple model of the game state. As expected, the LLM made numerous errors.

What was interesting was that when I pointed out the errors, the LLM told me it could fix them, and it did so in an interesting way: after each move in the game, it spelled out the game state in text. After that, it stopped making errors. Admittedly, this is a very cumbersome way to model the world -- an iterative written description -- but it worked well for this very simple game. To my mind, this was rather astonishing. And if there is a cumbersome way to accomplish something, you can usually count on computers to accomplish it anyway by throwing more and more processing power at the problem. (Actually, that's not totally true, since some tasks have exponential or even combinatorial time complexity. But still.)
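The poster doesn't share the actual game or transcript, so here is a toy reconstruction of the pattern being described, using Nim as a stand-in game and a scripted function in place of the LLM. The idea it sketches: if the authoritative state is re-serialized into text on every turn, the "player" never has to reconstruct the state from move history, and legality becomes checkable at each step. All names here (`render_state`, `fake_model`, etc.) are hypothetical.

```python
# Sketch of the "spell out the game state after each move" pattern,
# using a toy game of Nim. fake_model stands in for the LLM; the point
# is that the prompt it sees always contains an explicit serialization
# of the state, rather than requiring it to be inferred from history.

def render_state(piles):
    """Serialize the game state as plain text, like the LLM's per-move writeup."""
    return "Current piles: " + ", ".join(
        f"pile {i} has {n} stones" for i, n in enumerate(piles)
    )

def is_legal(piles, move):
    """A move is (pile_index, stones_to_take); legality is checked mechanically."""
    pile, take = move
    return 0 <= pile < len(piles) and 1 <= take <= piles[pile]

def apply_move(piles, move):
    if not is_legal(piles, move):
        raise ValueError(f"illegal move {move} in state {piles}")
    pile, take = move
    new = list(piles)
    new[pile] -= take
    return new

def fake_model(state_text, piles):
    """Hypothetical stand-in for the LLM: reads the serialized state and
    takes one stone from the first non-empty pile."""
    for i, n in enumerate(piles):
        if n > 0:
            return (i, 1)
    return None

piles = [3, 2]
while any(piles):
    prompt = render_state(piles)   # state is re-stated every turn
    move = fake_model(prompt, piles)
    assert is_legal(piles, move)   # the explicit state makes checking trivial
    piles = apply_move(piles, move)

print(piles)  # -> [0, 0]: game played out with no illegal moves
```

This is of course the cumbersome version the poster describes: the "model of the world" is just a text dump regenerated every turn, with the legality check living outside the model.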

In the last thread, my opinion was that LLMs are missing something essential. And I still think that, but I wouldn't be surprised at all if LLMs required very little theoretical augmentation to reach AGI.

The chess argument is not a good analogy because, like a great deal of AI criticism, it vastly overstates normal human capacity.

What's the biggest reason an amateur teenager doesn't make impossible moves? It's because they have a chess board right in front of them. It's extremely easy to track the state of the game and position of pieces when you have the laws of physics doing it for you. How many amateurs do you think could perfectly recreate a given game state if someone came along and threw the board over?

LLMs play chess entirely through text. It's the equivalent of asking a person to play a game of correspondence chess where they can't recreate the game physically and can't have any drawings of it; all they can do is keep a record of the moves already made. Outside of literal chess masters, how many humans would get through such a game without making a mistake?

This is a good point as an explanation for the problem, but it's also not an excuse. LLMs are really great when they have every detail they'd need to know in-context, but they're still very inefficient at gathering that context, and then they're inefficient at retaining the important bits.

The problem with LLMs isn't some far-reaching philosophical shortcoming like "they don't have world models" (whatever that means). The problems are implementation issues: not having eyeballs, not processing tokens efficiently, and so on. But those are still big issues.

A "world model" is not really some far-reaching philosophical characteristic. LLM discussions require you to keep in mind what the LLM is actually doing: it is making very impressive statistical correlations between the semantic mappings of token embeddings in a way that best conforms to the expected output. It is not modeling anything other than that. Whatever world knowledge it has is acquired only indirectly, through patterns in data.

If I asked you why a ball falls when I drop it, you'd say gravity; if I dropped the ball, you'd expect it to fall. If I asked why my glass of water has less water in it after an hour outside in the sun, you'd say evaporation. If I showed you a small metal projectile moving at high speed towards a person, you'd realize they were being shot at: that blood will come from the wound, that they will cry out in pain, that someone did the shooting, that there should have been a loud noise, and so on. You have a "model of the world"; that is, you understand that there are causal factors that produce reactions, and that when you observe a reaction you can infer what those causal factors may be. All of this holds irrespective of the statistical correlation of words.

Now, obviously an LLM will be able to tell you all of these things, because all of these situations occur as text in its training data, over and over. But there are things that exist outside that training data, and for those you should not expect it to perform the same way.

The idea that AI would need a detailed world model seems to run contrary to the "It became self-aware at 2:14 AM Eastern Time" doomsaying. Skynet wasn't supposed to be rate-limited by interactive world manipulation.

And honestly I haven't seen as much motion as I'd expect on the world interaction front. Where are my automatic burger-flipping robot arms? There are lots of thankless low-skill (but not no-skill) jobs, at least in the food industry (meat packing, for example), that have pretty bounded motions and requirements, but I haven't seen anywhere near as much progress in those areas as I would have expected.

And honestly I haven't seen as much motion as I'd expect on the world interaction front.

Give it a year or three; these things only recently started getting good enough for droids to be viable, and nobody wants to fuck it up.