
Culture War Roundup for the week of April 27, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Okay, it's Sunday, so I'm going to try my hand at a low-stakes OP. Apparently Richard Dawkins thinks Claude is conscious. The reaction seems universally to be that he's a dumb old boomer making a fool of himself, and I guess that's true. I'm not prepared to come to his defense on it.

Still, I can't help noticing that we totally have what most people would have cheerfully called "sentient computers" had they appeared in a sci-fi movie at any point before they were actually invented. Don't get me wrong, I understand that the reality of AI technology has turned out differently than a lot of people expected. I understand its limitations, and I recognize that the apparent goalpost-moving isn't necessarily cynical. But boy, those goalposts sure have been flying down the fucking field ever since this stopped being hypothetical and infinite money hit the table.

As a layman, I just want to put it out there: anti-AI-consciousness people, you haven't lost me, but I wish you were making better arguments. Every time I hear about qualia my eyes start to glaze over. Unfalsifiable philosophical constructs, and arbitrary opinions on where they might "exist," are not the kind of reassurance I'm looking for when machines are getting this convincing.

This seems to be the main piece of criticism floating around out there about Dawkins on this subject, and I find it kind of shit.

But even more importantly, consciousness is not about what a creature says, but how it feels. And there is no reason to think that Claude feels anything at all.

This seems to be all the author has to say on the actual subject. "Just trust me bro, I'm the feelings detector and I say no." Garbage. Come on guys, think ahead. Right now it's still mostly a boring tool, but these things are just going to get smaller, and cheaper, and put into robots, and put into people's houses. You need a stronger argument than this, and it needs to be comprehensible to normal people, or sooner or later the right toy is going to come down the pipe and one-shot society. Dawkins might be a dumb old boomer, but if you lose everyone dumber than him, the game is beyond over.

To me the whole situation is fascinating, because 20 or 30 years ago there was a popular idea that if a computer could convincingly simulate human conversation, then it was intelligent, and at that point you didn't even need to worry about whether the computer was conscious in the way that humans are conscious (or seem to be conscious). Kind of the Turing Test with a gloss on it.

Now that we have computers in the form of LLMs which can convincingly simulate human conversation, it seems like a trick, it seems like something important is missing; it seems like we aren't there yet. In another thread, I echoed the idea that LLMs don't model the universe. So for example, if you play chess with an LLM, there's no model of a chessboard in the system, which is why it sometimes makes illegal moves.

I believe it was William Poundstone who proposed the idea that consciousness means an intelligent system has a model of the universe so sophisticated that the model contains a sophisticated representation of the system itself. Using this criterion, I would say that LLMs are not conscious at the moment; their modeling is arguably too rudimentary.

In another thread, I echoed the idea that LLMs don't model the universe. So for example, if you play chess with an LLM, there's no model of a chessboard in the system, which is why it sometimes makes illegal moves.

I've seen this kind of notion argued in many different contexts, and I don't understand where the disconnect is. Because OF COURSE the LLM has an internal model of the chessboard in the system; that's the only reason it could possibly make moves that are correct at a rate better than chance. That model almost certainly doesn't look like a model any human would recognize, such as an 8x8 grid with pieces that each have a side, a position, and a set of allowed moves, which is why it makes mistakes in ways that no human would. But the fact that the model of chess - or of the world - would be incomprehensible to humans, and isn't based on any real empirical or experienced understanding of physics or rulesets, doesn't make it not a model.
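To make "better than chance" concrete, here's a rough sketch of how you could measure it, using the python-chess library; `query_model` is a hypothetical stand-in for whatever LLM call you're testing.

```python
# Rough sketch: how often are a model's proposed moves legal, across many
# games? python-chess does the validation; query_model is a hypothetical
# stand-in for the LLM under test.
import random
import chess

def query_model(san_history: list[str]) -> str:
    """Hypothetical: ask the model for the next move in SAN, given the game so far."""
    raise NotImplementedError

def legal_move_rate(games: int = 100, max_plies: int = 40) -> float:
    legal = total = 0
    for _ in range(games):
        board = chess.Board()
        history: list[str] = []
        for _ in range(max_plies):
            proposed = query_model(history)
            total += 1
            try:
                move = board.parse_san(proposed)  # raises ValueError on illegal/garbled SAN
                legal += 1
            except ValueError:
                move = random.choice(list(board.legal_moves))  # substitute so the game continues
            history.append(board.san(move))
            board.push(move)
            if board.is_game_over():
                break
    return legal / total
```

A model emitting random plausible-looking notation would score near zero on this; the fact that LLMs score far above that is the whole argument that *something* in there is tracking board state.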

Because OF COURSE the LLM has an internal model of the chessboard in the system; that's the only reason it could possibly make moves that are correct at a rate better than chance.

If I trained a Markov model on the textual representation of thousands of games, and constrained it to only play legal moves, I bet it'd do better than random chance, but worse than a classic minimax engine, which has defined metrics for what "winning" means. Is that an internal model, or just "usually a player castles after moving their knight and bishop" correlation?
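For what it's worth, here's a minimal sketch of that baseline, assuming you've already extracted SAN move lists from a pile of games. Note that the legality constraint lives entirely outside the "model":

```python
# Minimal sketch of the bigram baseline: count which SAN move tends to
# follow which, then at play time prefer the most frequent continuation
# that happens to be legal. All legality knowledge comes from python-chess,
# none of it from the trained counts.
import random
from collections import Counter, defaultdict
import chess

def train_bigram(games: list[list[str]]) -> dict[str, Counter]:
    counts: dict[str, Counter] = defaultdict(Counter)
    for moves in games:
        prev = "<start>"
        for san in moves:
            counts[prev][san] += 1
            prev = san
    return counts

def pick_move(board: chess.Board, counts: dict[str, Counter], prev_san: str) -> chess.Move:
    legal_san = {board.san(m): m for m in board.legal_moves}
    for san, _ in counts.get(prev_san, Counter()).most_common():
        if san in legal_san:
            return legal_san[san]
    # No trained continuation is legal here: fall back to a random legal move.
    return random.choice(list(legal_san.values()))
```

Whatever this thing has, it's hard to call it a model of a chessboard; the open question is whether an LLM's version differs in kind or only in degree.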

I paid a decent amount of attention when they did the LLM-vs-LLM chess tournament. You could read a bunch of the 'thinking' tokens (I use single quotes not to make fun of the term, but only to note that it is genuinely difficult to unpack what the word does and does not mean, beyond being conventionally used for a particular set of tokens). Some of them were genuinely impressive. Some were outright gibberish. Obviously, they were typically better in the opening phase of the game, where there are likely gobs of information on the internet and in books spelling out the reasoning behind particular moves. But that is not to say it was never impressive later in the game. Of course, that competition used a pretty significant harness that objectively retained the true state of the game. To what extent that matters, and whether it can be overcome, is an ongoing question.
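I don't know that tournament's exact setup, but a harness of the general shape I mean, where the true state lives outside the model and illegal outputs get rejected and re-prompted, is easy to sketch (`ask_llm` is a hypothetical placeholder):

```python
# Sketch of a state-retaining harness: the board lives out here in
# python-chess, the model only ever sees a textual rendering, and illegal
# or unparseable replies are rejected and re-prompted a few times.
import chess

def ask_llm(prompt: str) -> str:
    """Hypothetical: return the model's proposed move in SAN."""
    raise NotImplementedError

def harnessed_move(board: chess.Board, retries: int = 3) -> chess.Move:
    prompt = (
        f"Position (FEN): {board.fen()}\n"
        f"Legal moves: {', '.join(board.san(m) for m in board.legal_moves)}\n"
        "Reply with exactly one move in SAN."
    )
    for _ in range(retries):
        try:
            return board.parse_san(ask_llm(prompt).strip())
        except ValueError:
            continue  # illegal or unparseable; ask again
    # Fallback after repeated failures; real tournaments handle this differently.
    return next(iter(board.legal_moves))
```

How much of the apparent competence comes from the harness (say, from listing the legal moves in the prompt) is exactly the "to what extent that matters" question.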

One possibility for trying to make progress in testing this distinction is to consider chess variants, particularly novel ones that are very unlikely to have anything in the training data. Chess960 is almost this, but it does appear in the training data, even if minimally by comparison; to start, I wouldn't even go that far. "Let's play a game of chess where the knights and the bishops switch starting places" might be a good start. A harder version would be, "Let's play a game of chess where the knights move like bishops and the bishops move like knights." It's logically the same game, but you have to keep track of a difference in notation as well as in reasoning.

I imagine this would actually make the game harder for most people, since they're so used to thinking in one way. Good players will likely make more reasoning mistakes when calculating longer lines, but will probably be able to double-check well enough immediately before making a move that they won't attempt many illegal moves (unless they are severely time-constrained). Classic engines would have essentially no degradation in performance (because you'd have to bake the difference into the engine). I'm not quite sure what kind of degradation to expect from LLMs, or how one would interpret whatever degradation (or lack of it) one observed; but I'd be interested to see.

One could get a bit wackier: "Knights can no longer simply jump over pieces; at least one of the two possible L-shaped routes needs to be open," possibly also throwing in, for the fun of it, "Bishops may now jump over one piece along their route," or something. I played Knightmare Chess when I was young; there are a ton of tweaks you can do to mess with stuff. For humans, it is fun to keep track of various rule modifications and try to reason through them.
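The starting-places swap is the easiest of these to actually set up, since the movement rules stay standard; off-the-shelf python-chess will host it as a custom FEN. (The "knights move like bishops" version would need custom move generation, which python-chess doesn't do for you.)

```python
# The swapped-start variant as a custom FEN: knights and bishops trade
# starting squares, everything else (including castling) stays standard,
# so python-chess handles it natively.
import chess

SWAPPED_START = "rbnqknbr/pppppppp/8/8/8/8/PPPPPPPP/RBNQKNBR w KQkq - 0 1"
board = chess.Board(SWAPPED_START)
print(board)                          # render the variant starting position
print(len(list(board.legal_moves)))  # still 20 legal opening moves, same as standard
```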

At the very least, if LLMs absolutely tank in these sorts of variants, just spamming illegal moves all the time, while humans are able to at least moderately cope, it would be some amount of useful information. Of course, one must always have the disclaimer that it is certainly possible that with enough progress and compute, LLMs may even outperform humans. We sort of just don't know.