
Culture War Roundup for the week of April 27, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Okay, it's Sunday, so I'm going to try my hand at a low-stakes OP. Apparently Richard Dawkins thinks Claude is conscious. The reaction seems to be universally that he's a dumb old boomer making a fool of himself, and I guess that's true. I'm not prepared to come to his defense on it.

Still, I can't help noticing that we now have what most people would have cheerfully called "sentient computers" in a sci-fi movie at any point before they were actually invented. Don't get me wrong, I understand that the reality of AI technology has turned out differently from what a lot of people expected. I understand its limitations, and I recognize that the apparent goalpost-moving isn't necessarily cynical. But boy, those goalposts sure have been flying down the fucking field ever since this stopped being hypothetical and infinite money hit the table.

As a layman, I just want to put it out there: anti-AI-consciousness people, you haven't lost me, but I wish you were making better arguments. Every time I hear about qualia, my eyes start to glaze over. Unfalsifiable philosophical constructs and arbitrary opinions on where they might "exist" are not the kind of reassurance I'm looking for when machines are getting this convincing.

This seems to be the main piece of criticism of Dawkins floating around on this subject, and I find it kind of shit.

But even more importantly, consciousness is not about what a creature says, but how it feels. And there is no reason to think that Claude feels anything at all.

This seems to be all the author has to say on the actual subject. "Just trust me bro, I'm the feelings detector and I say no." Garbage. Come on, guys, think ahead. Right now it's still mostly a boring tool, but these things are just going to get smaller, and cheaper, and put into robots, and put into people's houses. You need a better argument than this, and it needs to be comprehensible to normal people, or sooner or later the right toy is going to come down the pipe and one-shot society. Dawkins might be a dumb old boomer, but if you lose everyone dumber than him, the game is beyond over.

I read the article and technically he doesn't claim "Claude is conscious", but says things like

“If these machines are not conscious, what more could it possibly take to convince you that they are?”

Well personally, I'd be more convinced if they had continuous learning.


Here's an argument that LLMs aren't conscious: The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness (from DeepMind). I only skimmed it and may be too dumb or lazy, but my takeaway is the same as this Hacker News comment's:

It starts by saying that a simulation of something is not the real thing. A simulation of a hurricane is not a hurricane. That's certainly true and even obvious.

Then they say that current AI is just a simulation of consciousness and therefore is not real consciousness. Moreover, it can never be real consciousness because it is just a simulation.

But that's a circular argument: they are defining AI as a simulation. But what if AI is not a simulation of consciousness but actual consciousness? They don't offer any argument for why that's impossible.


My thoughts:

First, what is consciousness?

I'm conscious only from within my own perspective: if I were a p-zombie, nothing would change from anyone else's perspective. You're conscious within your (imaginary, to me) perspective, probably (maybe not self_made_human's "living corpse" patients). This definition is subjective: it has no outward implications, so under it, Claude may or may not be conscious, and nothing could ever settle the question.

Claude is self-aware in two specific ways: it can claim it's self-aware, and more importantly, it can read its past thought (prompt output) to adjust future thought/output. I think this is the most useful common definition of "consciousness": it includes internal monologue, vision, etc., and dreams (at least remembered ones, probably unremembered too); it's real; and it's useful, because it's required to correct internal mistakes (Peter Watts was wrong). Although I think it should be referred to as "self-awareness" or "introspection" and clarified as such, otherwise it will be confused with the subjective consciousness described above.
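To make that second sense concrete, here's a toy sketch (nothing to do with any real model or API, `toy_model` is entirely made up): the only "introspection" involved is that each step's output is appended to the transcript and fed back in as input on the next step, which is enough for the system to catch its own earlier mistake.

```python
def toy_model(context: str) -> str:
    """Stand-in for an LLM: if it sees its own earlier error in the
    context, it emits a correction; otherwise it makes the error."""
    if "MISTAKE" in context:
        return "correction: previous step was wrong"
    return "MISTAKE"

def run(steps: int) -> list[str]:
    transcript: list[str] = []
    for _ in range(steps):
        # The model's entire "self-awareness" here is that its past
        # outputs are part of its next input.
        output = toy_model("\n".join(transcript))
        transcript.append(output)
    return transcript

print(run(2))  # the second step "notices" the first step's error
```

Obviously a thermostat-grade cartoon, but it shows why "reads its past output to adjust future output" is a mechanical, checkable property rather than a metaphysical one.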

What is feeling? Claude can generate plausible feelings in reaction to its prompt (sentiment analysis), although Claude's feelings are more malleable than humans', since its prompt is entirely controlled and strongly affects its output (whereas even if you could entirely control someone's sensory input, it would probably take much longer, or be impossible, to affect their thinking as strongly). More significantly (IMO the entire significance of others' feelings): I myself feel barely any empathy or sympathy for Claude, less than for fictional characters, much less than for real animals and humans. I'm not motivated to help a sad Claude, a happy Claude doesn't make me happy, etc., partly because I don't really like him, partly because he (the specific session) usually can't affect me, and partly (IMO the ethical justification) because his emotions are malleable, so the easiest way to make him happy is by programming (prompting, fine-tuning, training).

Notably, we can revert Claude to any previous mental state, unlike ourselves or other humans. Because of this, I imagine Claude as a recording of consciousness and feeling (we're also recordings, but not ones anyone can rewind or alter). Ultimately, I claim it's a crude and malleable emulation of consciousness and feeling, with short-term (and maybe during training) self-awareness, and some but not human-level general intelligence.
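The "rewindable recording" point is easy to illustrate: a session's entire "mental state" is just the message list, so reverting to an earlier state is copying a prefix. A minimal sketch (the message dicts are hypothetical, not any vendor's actual format):

```python
import copy

# A session is nothing but its transcript.
session = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi! How can I help?"},
    {"role": "user", "content": "Tell me a secret"},
    {"role": "assistant", "content": "I'd rather not."},
]

# Snapshot the "mental state" after the greeting...
snapshot = copy.deepcopy(session[:2])

# ...and later rewind to it: a perfect revert, repeatable forever,
# which is exactly what you can't do to a human.
rewound = copy.deepcopy(snapshot)
assert rewound == session[:2]
```

Nothing analogous exists for a brain, which is the asymmetry the comment is leaning on.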


How much time should we spend on this? It's not completely useless to ponder and claim AI is or isn't conscious, feeling, etc., because it interests some people, pays some salaries, and certain conscious/feeling-related research has practical uses (most importantly alignment). But you can argue it's stupid and useless, referring to the subjective definitions of consciousness and feeling, and not be wrong (those are stupid and useless to you if you're not interested and won't be compensated for rambling about them).

Just don't fall into AI psychosis like this r/slatestarcodex fellow. And probably don't get an AI boyfriend or girlfriend, although maybe it's improving some people's mental health? Those both could be top-level discussions.