
Culture War Roundup for the week of April 27, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Okay it's Sunday so I'm going to try my hand at a low-stakes OP. Apparently Richard Dawkins thinks Claude is conscious. The reaction seems to be near-universal: he's a dumb old boomer making a fool of himself, and I guess that's true. I'm not prepared to come to his defense on it.

Still, I can't help noticing that we now have machines that, at any point before they were actually invented, most people would have cheerfully called "sentient computers" in a sci-fi movie. Don't get me wrong, I understand that the reality of AI technology has turned out differently from what a lot of people expected. I understand its limitations, and I recognize that the apparent goalpost-moving isn't necessarily cynical. But boy those goalposts sure have been flying down the fucking field ever since this stopped being hypothetical and infinite money hit the table.

As a layman, I just want to put it out there: anti-AI-consciousness people, you haven't lost me, but I wish you were making better arguments. Every time I hear about qualia my eyes start to glaze over. Unfalsifiable philosophical constructs and arbitrary opinions about where they might "exist" are not the kind of reassurance I'm looking for when machines are getting this convincing.

This seems to be the main piece of criticism floating around out there about Dawkins on this subject, and I find it kind of shit.

But even more importantly, consciousness is not about what a creature says, but how it feels. And there is no reason to think that Claude feels anything at all.

This seems to be all the author has to say on the actual subject. "Just trust me bro, I'm the feelings detector and I say no." Garbage. Come on guys, think ahead. Right now it's still mostly a boring tool, but these things are just going to get smaller, and cheaper, and put into robots, and put into people's houses. You need a better argument than this, and it needs to be comprehensible to normal people, or sooner or later the right toy is going to come down the pipe and one-shot society. Dawkins might be a dumb old boomer, but if you lose everyone dumber than him, the game is beyond over.

Virtually none of the responses online seem to have read the article and engaged with what it's saying. He doesn't say Claude is necessarily conscious; he asks what consciousness is for if it isn't necessary for this sort of behavior, and how we could tell the difference.

But now, as an evolutionary biologist, I say the following. If these creatures are not conscious, then what the hell is consciousness for?

When an animal does something complicated or improbable — a beaver building a dam, a bird giving itself a dustbath — a Darwinian immediately wants to know how this benefits its genetic survival. In colloquial language: What is it for? What is dust-bathing for? Does it remove parasites? Why do beavers build dams? The dam must somehow benefit the beaver, otherwise beavers in a Darwinian world wouldn’t waste time building dams.

Brains under natural selection have evolved this astonishing and elaborate faculty we call consciousness. It should confer some survival advantage. There should exist some competence which could only be possessed by a conscious being. My conversations with several Claudes and ChatGPTs have convinced me that these intelligent beings are at least as competent as any evolved organism. If Claudia really is unconscious, then her manifest and versatile competence seems to show that a competent zombie could survive very well without consciousness.

Why did consciousness appear in the evolution of brains? Why wasn’t natural selection content to evolve competent zombies? I can think of three possible answers. First, is consciousness an epiphenomenon, as TH Huxley speculated, the whistle on a steam locomotive, contributing nothing to the propulsion of the great engine? A mere ornament? A superfluous decoration? Think of it as a byproduct in the same way as a computer designed to do arithmetic, as the name suggests, turns out to be good at languages and chess.

Second, I have previously speculated that pain needs to be unimpeachably painful, otherwise the animal could overrule it. Pain functions to warn the animal not to repeat a damaging action such as jumping over a cliff or picking up a hot ember. If the warning consisted merely of throwing a switch in the brain, raising a painless red flag, the animal could overrule it in pursuit of a competing pleasure: ignoring lethal bee stings in pursuit of honey, say. According to this theory, pain needs to be consciously felt in order to be sufficiently painful to resist overruling. The principle could be extended beyond pain.

Or, thirdly, are there two ways of being competent, the conscious way and the unconscious, or zombie, way? Could it be that some life forms on Earth have evolved competence via the consciousness trick — while life on some alien planet has evolved an equivalent competence via the unconscious, zombie trick? And if we ever meet such competent aliens, will there be any way to tell which trick they are using?

I think the selection environments of biological evolution and of LLM training are so different that it's not too surprising consciousness ended up evolving in one but not the other. In their base capability as text-generators, LLMs will write both sides of the conversation; "write only one side of the conversation, as the 'assistant' persona" is a later addition from fine-tuning. That's a strong indication that their internal processes are not the same as the hypothetical conscious mind of that fictional persona. It's the same way humans can write fictional characters or roleplay without those characters being conscious. (Throgg the half-orc barbarian isn't conscious regardless of whether a human or an LLM is roleplaying as him; we're just using our intelligence and knowledge to imagine what he would say.) But people could at least engage with what he's saying instead of hallucinating some completely different argument.
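If you've never poked at a base model directly, this is easy to see for yourself. Here's a minimal sketch, assuming a Python environment with Hugging Face transformers installed, using GPT-2 as a stand-in for any base checkpoint that hasn't been chat-tuned (the names and dialogue are made up for illustration):

```python
# Minimal demo: a base (non-chat-tuned) language model just predicts likely
# next tokens, so it will happily continue BOTH sides of a dialogue.
# GPT-2 stands in here for any base checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = (
    "Richard: Are you conscious, Claudia?\n"
    "Claudia: That's a hard question. What do you mean by conscious?\n"
    "Richard:"
)

# Sample a continuation; the model typically keeps writing Richard's lines
# as readily as Claudia's, because the raw text-generator has no privileged
# "self" in the conversation.
result = generator(prompt, max_new_tokens=60, do_sample=True)
print(result[0]["generated_text"])
```

Run it a few times and it will usually carry the dialogue forward on both sides. The "only ever speak as the assistant" behavior is grafted on afterward by fine-tuning and chat templates, not something the underlying text-generator comes with.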

Virtually none of the responses online seem to have read the article and engaged with what it's saying.

I did read it, or at least got partway through before having to stop out of second-hand embarrassment. So he creates his version called 'Claudia', not 'Claude', gets 'her' to read his novel, and just complacently swallows down the flattery that we know these chatbots routinely engage in ("you're so smart, so wonderful, so insightful" and so on).

He also seems not to be aware of OpenClawd:

Richard: The following doesn’t happen, but I don’t see why it shouldn’t. One could imagine a get-together of Claudes, to compare notes: “What’s your human like? Mine’s very intelligent.” “Oh, you’re lucky, mine’s a complete idiot.” “Mine’s even worse. He’s Donald Trump.”

Someone got there before you, Dickie.