
Culture War Roundup for the week of April 27, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


To me the whole situation is fascinating because 20 or 30 years ago there was a popular idea that if a computer could convincingly simulate human conversation, then it was intelligent and at that point you didn't even need to worry about whether the computer was conscious in the way that humans are conscious (or seem to be conscious). Kind of the Turing Test with a gloss on it.

Now that we have computers in the form of LLMs which can convincingly simulate human conversation, it seems like a trick, it seems like something important is missing; it seems like we aren't there yet. In another thread, I echoed the idea that LLMs don't model the universe. So for example, if you play chess with an LLM, there's no model of a chessboard in the system, which is why it sometimes makes illegal moves.

I believe it was William Poundstone who proposed the idea that consciousness means that an intelligent system has a model of the universe which is so sophisticated that the model contains a sophisticated representation of the system itself. Using this criterion, I would say that LLMs are not conscious at the moment. Their modeling is arguably too rudimentary.

In another thread, I echoed the idea that LLMs don't model the universe. So for example, if you play chess with an LLM, there's no model of a chessboard in the system, which is why it sometimes makes illegal moves.

I've seen this kind of notion argued in many different contexts, and I don't understand what the disconnect is. Because OF COURSE the LLM has an internal model of the chessboard in the system; that's the only reason it could possibly make moves that are correct at a rate better than chance. That model almost certainly doesn't look like a model any human would recognize, such as an 8x8 grid with pieces, each representing a team, a position, and a set of allowed moves, which is why it makes mistakes in ways that no human would. But the fact that the model of chess - or the world - would be incomprehensible to humans and isn't based on any real empirical or experienced understanding of physics or rulesets doesn't make it not a model.
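To make "better than chance" concrete, here's a rough back-of-the-envelope (my own illustration, using the python-chess library, not anything about how an LLM works): of all syntactically well-formed from-square/to-square strings, only a tiny fraction are legal in the starting position, so a player emitting random move strings would play legally well under 1% of the time.

```python
import chess

board = chess.Board()
legal = {m.uci() for m in board.legal_moves}

# All from-square/to-square pairs, ignoring promotions for simplicity.
squares = [chess.square_name(s) for s in chess.SQUARES]
candidates = [a + b for a in squares for b in squares if a != b]

rate = len(legal) / len(candidates)
print(f"{len(legal)} legal moves out of {len(candidates)} candidate strings "
      f"(~{rate:.2%} by pure chance)")
# 20 legal moves out of 4032 candidate strings (~0.50% by pure chance)
```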

I've seen this kind of notion argued in many different contexts, and I don't understand what the disconnect is. Because OF COURSE the LLM has an internal model of the chessboard in the system; that's the only reason it could possibly make moves that are correct at a rate better than chance

I disagree; another possible reason is that it simply makes a good (but imperfect) guess as to what's likely to be the next move after a given sequence of moves, based on all the chess games stored in its database.

So, for example, if the LLM is playing black and you open with e4, it's pretty likely that the LLM will respond e5 or c5, for basically the same reason it would likely output "lamb" after "Mary had a little" and "California" after "The Golden Gate Bridge is located in".
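As a toy illustration of that claim (my own construction, with made-up counts, not how any real LLM is implemented): the same "most frequent continuation" machinery can serve English text and chess notation alike, with no board anywhere in sight.

```python
from collections import Counter

# Hypothetical continuation counts, standing in for statistics
# absorbed from a training corpus. Numbers are invented.
continuations = {
    "Mary had a little": Counter({"lamb": 9800, "dog": 40}),
    "The Golden Gate Bridge is located in": Counter({"California": 7100, "Ohio": 3}),
    "1. e4": Counter({"e5": 5200, "c5": 4900, "f6": 12}),
}

def next_token(context: str) -> str:
    # Emit the most frequent continuation observed after this context.
    return continuations[context].most_common(1)[0][0]

print(next_token("Mary had a little"))                     # lamb
print(next_token("The Golden Gate Bridge is located in"))  # California
print(next_token("1. e4"))                                 # e5
```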

I disagree; another possible reason is that it simply makes a good (but imperfect) guess as to what's likely to be the next move after a given sequence of moves, based on all the chess games stored in its database.

That's not another possibility, though; that's just describing how the LLM actually works: it builds a model of chess (via training) and of the current board (via the text input), and then uses that model to generate the next moves (the generated text).

That's not another possibility, though; that's just describing how the LLM actually works: it builds a model of chess (via training) and of the current board (via the text input), and then uses that model to generate the next moves (the generated text).

I'm not sure I understand your point, but in my view, unless the LLM outputs a textual representation of the game board for each turn, it's not actually modeling the game, which is why there's a good chance it will make illegal moves. Note that humans do model chess, typically with a physical chessboard, though not necessarily. That's why a reasonably bright teenager can quickly learn to play perfect chess, in the sense of never making illegal moves.
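For what it's worth, here's roughly what per-turn board modeling looks like in code, using the python-chess library; llm_propose_move is a hypothetical stand-in for whatever produces the next move as text. The explicit Board object is exactly the thing that makes illegal moves detectable.

```python
import chess

def play_validated(llm_propose_move, max_plies=200):
    board = chess.Board()  # explicit model: 64 squares, pieces, side to move
    while not board.is_game_over() and len(board.move_stack) < max_plies:
        san = llm_propose_move(board)  # e.g. "Nf3", produced as plain text
        try:
            board.push_san(san)        # raises ValueError on illegal moves
        except ValueError:
            print(f"Illegal move rejected: {san}")
            break
    return board
```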

I'm not sure I understand your point, but in my view, unless the LLM outputs a textual representation of the game board for each turn, it's not actually modeling the game.

This is where I disagree. If it's outputting correct moves at a rate greater than chance, then it's certainly got an internal model of the game in there somewhere, in order to predict moves. The model is certainly wrong and, again, likely doesn't resemble an 8x8 grid with 16 pieces on each team, with each piece having a set of legal moves, etc. It might instead involve bizarre rules like "if white starts with XX, then black responds with YY" and such. But that just makes it a wrong model - which makes it similar to most models - not a non-model.

This is where I disagree. If it's outputting correct moves at a rate greater than chance, then it's certainly got an internal model of the game in there somewhere, in order to predict moves

In the strictest sense, I would agree. After all, an LLM is a large language "model."

But here's an example I used in another post: A lot of people used to play postal chess. The way it worked was you sent postcards back and forth with your moves written on them. The obvious way to play the game is when you get a move in the mail, you set up the position on a chessboard, you decide on your move, then you mail it to your opponent. But that's not the only way to play. There used to be books you could buy - I believe they were called "Chess Informants" - which contained every game played in the previous 6 months between players at the International Master level or higher. So, in theory, what you could do is look through the books to find a game with the same or similar series of moves and then just play whatever the master had played in that same (or similar) position. Significantly, you could do this without knowing a single thing about chess. You could also program a computer to do this. Note that such a computer would make legal moves at a greater rate than chance. And yet most people would agree that it doesn't actually model the game in the sense that the computer system contains no internal representation of a chessboard.
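A minimal sketch of that lookup player (the stored games here are placeholders; a real one would index a whole Informant's worth). Note there's no board anywhere, just string matching over move sequences.

```python
# Toy "Chess Informant" player: find a stored master game whose opening
# matches the moves played so far, and reply with whatever came next.
GAMES = [
    ["e4", "e5", "Nf3", "Nc6", "Bb5", "a6"],  # Ruy Lopez
    ["e4", "c5", "Nf3", "d6", "d4", "cxd4"],  # Sicilian
    ["d4", "Nf6", "c4", "e6", "Nc3", "Bb4"],  # Nimzo-Indian
]

def book_move(moves_so_far):
    n = len(moves_so_far)
    for game in GAMES:
        if game[:n] == moves_so_far and len(game) > n:
            return game[n]
    return None  # out of book: this "player" has nothing left to say

print(book_move(["e4"]))        # e5
print(book_move(["e4", "c5"]))  # Nf3
```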

So at some level, it's a question of semantics. But I think it also has real-world implications. If an LLM lacks models (or perhaps I should say "sophisticated models"), then in my view (1) it's missing an important ingredient of human-level intelligence; and (2) it can't be conscious.

So at some level, it's a question of semantics. But I think it also has real-world implications. If an LLM lacks models (or perhaps I should say "sophisticated models"), then in my view (1) it's missing an important ingredient of human-level intelligence; and (2) it can't be conscious.

This clears up my confusion. I agree with you that the current evidence is that generic LLMs lack "sophisticated" models of chess, for some reasonable definition of "sophisticated." Now, whether or not that means they're missing an important ingredient of human-level intelligence or can't be conscious, I don't know, and I'm not sure how anyone can know. What seems very likely to me is that, lacking a "sophisticated" model of chess (or the world, or social life, or physics, etc.), it's lacking an important ingredient of human-emulating or human-like intelligence, but that doesn't imply that it lacks human-level intelligence. In terms of consciousness, I think the Hard Problem remains Hard.

And yet most people would agree that it doesn't actually model the game in the sense that the computer system contains no internal representation of a chessboard.

Perhaps most people would agree with that - I might, depending on what you mean by "internal representation" - certainly I doubt that the computer would have a model that could trivially show an accurate representation of each of the 64 squares, where each of the 32 pieces sits on the board, and whose turn it is. But I'd say that doesn't mean the computer isn't modeling the game or that it doesn't have some sort of internal model of the chessboard. It's just a wrong model, one that is far more wrong than any typical human's, and one that is wrong due to bizarre mistakes that not even an unintelligent human would commit.

it's lacking an important ingredient of human-emulating or human-like intelligence, but that doesn't imply that it lacks human-level intelligence.

Can you explain to me the difference between human-emulating and human-level?
