
Culture War Roundup for the week of April 27, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Okay, it's Sunday, so I'm going to try my hand at a low-stakes OP. Apparently Richard Dawkins thinks Claude is conscious. The reaction seems to be, pretty much universally, that he's a dumb old boomer making a fool of himself, and I guess that's true. I'm not prepared to come to his defense on it.

Still, I can't help noticing that we now have what most people, at any point before these things were actually invented, would have cheerfully accepted as "sentient computers" in a sci-fi movie. Don't get me wrong, I understand that the reality of AI technology has turned out differently than what a lot of people expected. I understand its limitations, and I recognize that the apparent goalpost-moving isn't necessarily cynical. But boy, those goalposts sure have been flying down the fucking field ever since this stopped being hypothetical and infinite money hit the table.

As a layman, I just want to put it out there: anti-AI-consciousness people, you haven't lost me, but I wish you were making better arguments. Every time I hear about qualia my eyes start to glaze over. Unfalsifiable philosophical constructs and arbitrary opinions about where they might "exist" are not the kind of reassurance I'm looking for when machines are getting this convincing.

This seems to be the main piece of criticism floating around out there about Dawkins on this subject, and I find it kind of shit.

But even more importantly, consciousness is not about what a creature says, but how it feels. And there is no reason to think that Claude feels anything at all.

This seems to be all the author has to say on the actual subject. "Just trust me bro, I'm the feelings detector and I say no." Garbage. Come on guys, think ahead. Right now it's still mostly a boring tool, but these things are just going to get smaller, and cheaper, and put into robots, and put into people's houses. You need a better argument than this, and it needs to be comprehensible to normal people, or sooner or later the right toy is going to come down the pipe and one-shot society. Dawkins might be a dumb old boomer, but if you lose everyone dumber than him, the game is beyond over.

To me the whole situation is fascinating because 20 or 30 years ago there was a popular idea that if a computer could convincingly simulate human conversation, then it was intelligent and at that point you didn't even need to worry about whether the computer was conscious in the way that humans are conscious (or seem to be conscious). Kind of the Turing Test with a gloss on it.

Now that we have computers in the form of LLMs which can convincingly simulate human conversation, it seems like a trick, it seems like something important is missing; it seems like we aren't there yet. In another thread, I echoed the idea that LLMs don't model the universe. So for example, if you play chess with an LLM, there's no model of a chessboard in the system, which is why it sometimes makes illegal moves.

I believe it was William Poundstone who proposed the idea that consciousness means that an intelligent system has a model of the universe which is so sophisticated that the model contains a sophisticated representation of the system itself. Using this criterion, I would say that LLMs are not conscious at the moment. Their modeling is arguably too rudimentary.

In another thread, I echoed the idea that LLMs don't model the universe. So for example, if you play chess with an LLM, there's no model of a chessboard in the system, which is why it sometimes makes illegal moves.

I've seen this kind of notion argued in many different contexts, and I don't understand where the disconnect is. Because OF COURSE the LLM has an internal model of the chessboard in the system; that's the only reason it could possibly make moves that are correct at a rate better than chance. That model almost certainly doesn't look like a model any human would recognize, such as an 8x8 grid of pieces, each with a side, a position, and a set of allowed moves, which is why it makes mistakes in ways that no human would. But the fact that the model of chess - or the world - would be incomprehensible to humans and isn't based on any real empirical or experienced understanding of physics or rulesets doesn't make it not a model.
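
To put a rough number on "better than chance," here's a quick sketch using the python-chess library: check how often a uniformly random from-square/to-square pair is even a legal move from the starting position. The exact figure doesn't matter; the point is that the chance baseline is tiny.

```python
# Rough baseline for "correct at a rate better than chance":
# how often is a uniformly random square-to-square move legal?
# Requires the python-chess package (pip install chess).
import random
import chess

def random_move() -> chess.Move:
    """Pick two distinct squares at random, ignoring the rules entirely."""
    frm, to = random.sample(chess.SQUARES, 2)
    return chess.Move(frm, to)

board = chess.Board()  # starting position
trials = 100_000
legal = sum(random_move() in board.legal_moves for _ in range(trials))
print(f"legality rate of random moves: {legal / trials:.3%}")
# From the start position there are 20 legal moves out of 64*63 = 4032
# ordered square pairs, so this prints roughly 0.5%.
```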

I've seen this kind of notion argued in many different contexts, and I don't understand where the disconnect is. Because OF COURSE the LLM has an internal model of the chessboard in the system; that's the only reason it could possibly make moves that are correct at a rate better than chance

I disagree; another possible reason is that it simply makes a good (but imperfect) guess as to what's likely to be the next move after a sequence of moves, based on all the chess games stored in its database.

So for example, if the LLM is playing black and you open e4, it's pretty likely that the LLM will respond e5 or c5, for basically the same reason it would likely output "lamb" after "Mary had a little" and "California" after "The Golden Gate Bridge is located in".
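
For what it's worth, here's a toy sketch of the kind of lookup I have in mind: count which reply most often follows 1. e4 in a corpus of games and emit that. The four "games" below are made-up stand-ins, and I'm not claiming an LLM literally stores or retrieves anything this way; it's only to make the "likely next move" idea concrete.

```python
# Toy "most likely continuation" lookup over a tiny, made-up corpus of games.
# This illustrates the pattern-completion idea, not how an LLM actually works.
from collections import Counter

GAMES = [
    "e4 e5 Nf3 Nc6 Bb5",   # Ruy Lopez
    "e4 c5 Nf3 d6 d4",     # Sicilian
    "e4 e5 Nf3 Nf6",       # Petrov
    "d4 d5 c4 e6",         # Queen's Gambit Declined
]

def most_common_reply(prefix: str) -> str:
    """Return the move that most often follows `prefix` in GAMES."""
    ctx = prefix.split()
    counts = Counter(
        moves[len(ctx)]
        for game in GAMES
        if (moves := game.split())[:len(ctx)] == ctx and len(moves) > len(ctx)
    )
    return counts.most_common(1)[0][0] if counts else "(no data)"

print(most_common_reply("e4"))  # -> "e5" (seen twice, vs "c5" once)
```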

based on all the chess games stored in its database.

It doesn't have a "database"; this is a fundamental misunderstanding of what's going on under the hood. With LLMs now solving open math problems, I'm puzzled that the discourse remains around "it's just doing what it's seen before", backed by various levels of unsound understanding.

It doesn't have a "database"; this is a fundamental misunderstanding of what's going on under the hood.

Maybe I am using the wrong word. What do you call the set of data used to train an LLM? Is it just "training data"?

I'm puzzled that the discourse remains around "it's just doing what it's seen before"

I think a more accurate statement is "It's just making predictions based on what it's seen before." Of course the word "just" might not be doing justice to the capabilities of an LLM, because they are definitely very impressive.

But anyway, my point is that it's possible for an LLM to make legal chess moves without actually modeling chess. Do you dispute this?

What do you call the set of data used to train an LLM? Is it just "training data"?

The point is that the training data is not accessible at inference time. To the extent that being trained on chess data gives the LLM information about how to respond to a particular opening, it's because the LLM has learned that information, much as a human who has studied openings has.
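
If it helps, here's a minimal sketch of that distinction with synthetic data and plain numpy (nothing LLM-specific): fit some weights to data, delete the data, and the weights alone still answer new queries. That's the sense in which the information has been learned rather than stored for lookup.

```python
# Learned vs. looked up: after training, the data can be thrown away and the
# fitted weights still generalize to inputs they never saw. Synthetic example
# using plain logistic regression; nothing here is specific to LLMs.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # the hidden rule to learn

w = np.zeros(2)
for _ in range(2000):                            # gradient descent
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / len(y)

del X, y                                         # the "training data" is gone

x_new = np.array([0.8, -0.2])                    # never seen during training
print(1.0 / (1.0 + np.exp(-x_new @ w)))          # still a confident, correct call
```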

But anyway, my point is that it's possible for an LLM to make legal chess moves without actually modeling chess. Do you dispute this?

Sure, in the same way that it's possible for a human to make legal chess moves without modeling chess:

  • you could just get lucky and make random moves that happen to be legal
  • you might know how all the pieces move and that the goal is a checkmate but have basically no understanding of strategy (I am here btw)
  • the above, but you might have studied a book on chess openings and endgames

It's unclear to me at which point even a human can be said to "model" chess.

It's unclear to me at which point even a human can be said to "model" chess.

Many humans of course do openings in a somewhat similar way; they memorize a bunch. The modelling shows up in the fact that a (competent) human will have memorized a number of opening variations and will play into the one that matches what he wants for the middlegame; the LLM has essentially memorized a number of opening variations and then picks one with an element of randomness.

It's certainly possible to play good chess without memorizing openings; time constraints are the main reason players memorize them at all.

You can say: "Hmm, e4 -- he wants to dominate the centre with that pawn. I need to contest it; e5 would work -- or I could do it indirectly, like Nf6? But then he will just advance the pawn and threaten my knight; seems like a wasted move. Better stick with e5."

This takes much longer than "let's go for the Italian Game", but it's the kind of modelling that you need to do once you're beyond your memorized opening; LLMs don't do anything like that ever.
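
For contrast with pure memorization, here's a crude sketch of that "think it through" step using python-chess: enumerate Black's legal replies to 1. e4 and score each with a toy heuristic that likes central pawns. The heuristic is invented for illustration; it just happens to land on the same "contest the centre" replies.

```python
# Crude one-ply "reason about the position" sketch with python-chess:
# score each legal reply to 1. e4 by material plus a small bonus for
# occupying the centre. The heuristic is invented purely for illustration.
import chess

PIECE_VALUE = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
               chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}
CENTER = (chess.D4, chess.D5, chess.E4, chess.E5)

def score_for_black(board: chess.Board) -> float:
    """Material balance from Black's point of view, plus a centre bonus."""
    score = 0.0
    for square, piece in board.piece_map().items():
        sign = 1 if piece.color == chess.BLACK else -1
        score += sign * PIECE_VALUE[piece.piece_type]
        if square in CENTER:
            score += sign * 0.3
    return score

def evaluate_reply(board: chess.Board, move: chess.Move) -> float:
    child = board.copy(stack=False)
    child.push(move)
    return score_for_black(child)

board = chess.Board()
board.push_san("e4")
best = max(board.legal_moves, key=lambda m: evaluate_reply(board, m))
print(board.san(best))  # a central pawn push such as e5 or d5
```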

This argument smells like the old canard of LLMs not being able to do anything novel, not being able to do anything that they haven't seen before. Again, I think this can be dismissed out of hand now that LLMs are solving open math problems.

LLMs don't do anything like that ever.

LLMs don't make plans while evaluating tradeoffs and then do things to put those plans into action? I don't know how you can even believe that in May 2026. Have you never used a coding agent and watched it plan a solution, analyze different approaches with their respective tradeoffs, and propose the option it thinks is best?