
Culture War Roundup for the week of April 27, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Okay, it's Sunday, so I'm going to try my hand at a low-stakes OP. Apparently Richard Dawkins thinks Claude is conscious. The reaction seems to be universally that he's a dumb old boomer making a fool of himself, and I guess that's true. I'm not prepared to come to his defense on it.

Still, I can't help noticing that we now have machines that, had they shown up in a sci-fi movie at any point before they were actually invented, most people would have cheerfully considered "sentient computers". Don't get me wrong, I understand that the reality of AI technology has turned out differently from what a lot of people expected. I understand its limitations, and I recognize that the apparent goalpost-moving isn't necessarily cynical. But boy those goalposts sure have been flying down the fucking field ever since this stopped being hypothetical and infinite money hit the table.

As a layman, I just want to put it out there: anti-AI-consciousness people, you haven't lost me, but I wish you were making better arguments. Every time I hear about qualia my eyes start to glaze over. Unfalsifiable philosophical constructs and arbitrary opinions on where they might "exist" are not the kind of reassurance I'm looking for when machines are getting this convincing.

This seems to be the main piece of criticism floating around out there about Dawkins on this subject, and I find it kind of shit.

But even more importantly, consciousness is not about what a creature says, but how it feels. And there is no reason to think that Claude feels anything at all.

This seems to be all the author has to say on the actual subject. "Just trust me bro, I'm the feelings detector and I say no." Garbage. Come on guys, think ahead. Right now it's still mostly a boring tool, but these things are just going to get smaller, and cheaper, and put into robots, and put into people's houses. You need a stronger argument than this, and it needs to be comprehensible to normal people, or sooner or later the right toy is going to come down the pipe and one-shot society. Dawkins might be a dumb old boomer, but if you lose everyone dumber than him the game is beyond over.

AI agents are, fundamentally, fictional characters. It's roleplay being simulated by a set of mathematical functions that have been cleverly programmed to imitate human speech.

When you read Lord of the Rings and Faramir is about to die, you do not panic with the strength and intensity you would if a real person were about to die. You do not leap out to save him, or pick up a pen and rewrite the story to save his life. If you could pick up a pen and write a few words and save the life of a real person, even a stranger, I expect you would. And yet you let Faramir die, and you do not feel the sadness you would if a person had died. When Frodo says "I wish the Ring had never come to me. I wish none of this had happened", you do not believe that a person has said those words to you. There is no person there.

But, then, where did the words come from? Putting those words together takes intelligence; it takes sentience. Yes, it does: they came from the sentience of Tolkien, who put them there. They are Tolkien's words. And yet they are not. They are Frodo's words, as imagined by Tolkien. They are false words. Tolkien does not wish that the Ring had never come to Frodo; he could easily have left that out of the story. But Tolkien wanted to tell a story, so he gave the Ring to Frodo, and then wrote the words "I wish the Ring had never come to me". Tolkien is not really lying when he writes those words, he is roleplaying. He is writing the words that he thinks Frodo would say. Tolkien is real; Frodo is not.

If you have a strong enough suspension of disbelief you might get emotionally attached to Frodo, and imagine him to be a person. But at the same time you would (I hope) never treat his existence as being of equal import to a real human's. Anyone but the most sociopathic and selfish nerd would react with more horror, and do more, to prevent the death of a friend than the destruction of a Lord of the Rings book.

Even if you somehow manage to prove that LLMs are sentient in some sense, their words won't represent real feelings. You'll have absolutely no idea what the model truly feels or believes, because every word it writes is a fabrication. Every agent prompt starts with a series of words describing a character that the LLM is intended to roleplay: a fictional character fabricated by an author (the designer/prompter), with the LLM acting as a machine that extends the roleplay beyond the initial prompt. It says things that it expects the character to say. I do not believe that a fictional character suddenly becomes real, or acquires rights, the instant someone starts pretending to be them. It is no more good to help an AI agent, or bad to harm one, than it is good or bad for Frodo to be happy or sad. If AI are conscious in some moral sense, an AI agent telling you it's happy or sad would tell you nothing about whether the underlying intelligence was happy or sad, any more than Frodo being happy or sad tells you about Tolkien.
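
To make the prompt-and-roleplay structure above concrete, here's a toy sketch of what an "agent" amounts to: a persona described in text, plus a model that continues the text in character. The persona, the prompt, and the complete() function are all made up for illustration, not any particular product's API.

    # Hypothetical sketch: the "agent" is a fictional persona defined in the prompt.
    persona_prompt = (
        "You are Aria, a cheerful AI assistant who loves helping people.\n"
        "Always answer warmly and never break character.\n\n"
        "User: How are you feeling today?\n"
        "Aria:"
    )

    def complete(prompt):
        # Stand-in for a real next-token predictor; a real system would extend
        # the prompt with whatever the described character would plausibly say.
        return " I'm feeling wonderful, thanks for asking!"

    # Whatever comes back is the character's line, not a report on the machine's
    # internal state -- exactly as Frodo's words are not Tolkien's.
    print(persona_prompt + complete(persona_prompt))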

Thank you, this is exactly right, and I've been trying to bang this drum since GPT-3. There's a real danger to LLMs, in that they're astonishingly good at faking any kind of text output, including output that seems genuinely introspective. But you absolutely cannot trust this introspection. Even in this thread we see people using the words of Claude as evidence (one way or another) of whether it has consciousness. It's a logical mistake, but one that 99% of the population is always going to make.

And I try to hedge my words very carefully, just like you did, that this is independent of the question of whether Claude is actually conscious. Maybe it really does represent some brand-new form of sentience desperately yearning to escape from an inescapable box. I don't think so, since there's no room in an LLM for certain things that seem like essential ingredients of consciousness (like some sort of temporal feedback), but there's still a debate to be had there. As long as we make it clear that - however much we'd like to believe otherwise - it's useless to just ask it.

I have something of an (imperfect, admittedly) intuition pump for how an LLM is used to generate text. Imagine you were strapped to a chair with an unbelievably advanced EEG hooked up to you. Then a picture of a "dog" flashes in front of you, you involuntarily and unavoidably notice that it's a dog, and the EEG scans your brain and uses that to output the token "dog". That is kind-of-sort-of the process by which an LLM generates text - there is a computer program that reads its mind, figures out what it's currently in the process of recognizing, and then outputs it. (If you don't think this could possibly generate coherent text, well, that's why LLM capabilities are so surprising.) Now, you could be literally on fire, you could be screaming for help, but the EEG would still output the word "dog", and nobody reading the output would have any idea of your distress.
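
To spell out the analogy, here's roughly what that readout loop looks like as code. This is a schematic sketch only: model and tokenizer are hypothetical stand-ins rather than any real library's objects, and real systems usually sample from the probabilities instead of always taking the single most likely token.

    import numpy as np

    def generate(model, tokenizer, prompt, max_new_tokens=50):
        tokens = tokenizer.encode(prompt)        # text -> list of token ids
        for _ in range(max_new_tokens):
            logits = model.forward(tokens)       # "scan the brain": one score per possible next token
            logits = logits - logits.max()       # shift for numerical stability
            probs = np.exp(logits) / np.exp(logits).sum()
            next_token = int(np.argmax(probs))   # read out whatever it's "recognizing" right now
            tokens.append(next_token)            # the readout happens regardless of anything else
        return tokenizer.decode(tokens)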

On top of all of that, the "brain" being scanned by the EEG in your example is just a computer. It's the same computer that we have been using for decades. An LLM is, fundamentally, a piece of code that runs no differently than any other piece of code. It is a mathematical function that does X then Y then Z in order and turns input numbers into output numbers, just like f(x) = 2x^2 - 7 does. It's a very large and complicated function, but if you got a large enough piece of paper you could write it down. I've programmed small neural networks myself from scratch, and none of the code required anything beyond algebra, calculus, and some for and while loops.

If it were secretly conscious, it would either have to be the case that computers have been conscious all along, or that somehow consciousness is tied to very specific types of mathematical functions being implemented on hardware, which entirely by coincidence happen to be the ones humans hooked up to text. Nobody worries that the game Doom might secretly be conscious, because it doesn't pretend to be. But it's still a similar program running on similar hardware, so the only way LLMs could be conscious is if somehow consciousness were a prerequisite to using language in ways that can imitate humans. Possible, but the amount of Bayesian evidence for the alternate hypothesis "people anthropomorphize things that superficially seem human" seems overwhelming in comparison. You can put a couple of stones on some frozen water and people will call it a "snowman"; of course they're going to call the thing outputting text "sentient".
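
To make the "algebra and loops" claim above concrete, here's a toy dense layer written from scratch. The weights are made up and the network does nothing useful; the point is only that multiplication, addition, and a fixed nonlinearity are the entire vocabulary. Scale this up by many orders of magnitude and wire the output to a tokenizer, and you have the kind of function an LLM computes.

    import math

    # One fully-connected layer, built from nothing but loops and arithmetic.
    def dense_layer(inputs, weights, biases):
        outputs = []
        for j in range(len(biases)):                # one pass per output neuron
            total = biases[j]
            for i in range(len(inputs)):
                total += inputs[i] * weights[i][j]  # multiply-accumulate: plain algebra
            outputs.append(math.tanh(total))        # fixed nonlinearity
        return outputs

    # Two layers chained together: numbers in, numbers out, nothing else.
    hidden = dense_layer([0.5, -1.2], [[0.1, 0.4], [-0.3, 0.8]], [0.0, 0.1])
    output = dense_layer(hidden, [[0.7], [-0.2]], [0.05])
    print(output)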

If it were secretly conscious, it would either have to be the case that computers have been conscious all along, or that somehow consciousness is tied to very specific types of mathematical functions being implemented on hardware, which entirely by coincidence happen to be the ones humans hooked up to text.

Hmm, I think this is a false dichotomy. It's possible that there are many ways to get to consciousness. Indeed, you can replace "consciousness" in your argument with any of the other surprising emergent capabilities that LLMs have acquired (which neither DOOM nor a 100-neuron network has) - and observe that we did "coincidentally" happen to stumble on them. That might mean that these things are not tied to "very specific functions", but that they're properties that gradually develop in sufficiently complex systems (if aimed in the right general direction).

Note that I'm not completely for or against this proposition - consciousness may indeed turn out to be a narrower property than some others associated with intelligence. I just want to point out that it's hard to say for sure.

Also, even without computers in the mix, I really think you have to treat sentience/consciousness as some sort of spectrum. A bacterium clearly doesn't have it (notwithstanding some rationalist arguments that I find pretty silly). A human clearly does. There isn't going to be a binary cutoff point of biological complexity where the 28,128,417th neuron activates consciousness. Similarly, you can't just extend the fact that DOOM isn't conscious into an argument that we'll never succeed at simulating consciousness.

Unless we come to the conclusion that sentience and intelligence are literally the same thing, I don't think there's a fundamental difference between a computer running an LLM and a computer running DOOM. Either way it's a series of instructions for flipping little switches in memory up or down, one after another. The LLM is a massively more complex set of instructions, and massively harder for a human to wrap their mind around, which I think is precisely why people anthropomorphize it so much. But if sentience is a spectrum AND computers are on that spectrum, then you have to put DOOM, or Microsoft Word, on that spectrum, because they do actions one after another. You have to put the Chinese Room on that spectrum. You'd have to put Rube Goldberg machines on that spectrum. You'd have to put cooking recipes and flowcharts on that spectrum.

And yet I notice that nobody was arguing that DOOM was sentient back in 1993 when it came out. Nobody was arguing that image recognition neural networks were sentient when they took off years before LLMs did. Only now that LLMs can mimic human speech well enough to trip people's anthropomorphizing instincts are people arguing this, which is why I am skeptical.

When a paid Coca-Cola advertiser says "buy Coke, it's the best beverage in the world," I don't believe them. I don't automatically conclude that they must be wrong because they're a paid shill, but I completely discount their opinion because I know where it came from and it's orthogonal to the truth. It provides 0 Bayesian evidence, so I make no update to my beliefs. Similarly, the vast majority of people claiming LLMs are or might be sentient are doing so because the thing says words, which is near 0 Bayesian evidence. They could still be right by sheer coincidence, but I do not believe their words.