Culture War Roundup for the week of April 27, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Okay, it's Sunday, so I'm going to try my hand at a low-stakes OP. Apparently Richard Dawkins thinks Claude is conscious. The reaction seems to be universally that he's a dumb old boomer making a fool of himself, and I guess that's true. I'm not prepared to come to his defense on it.

Still, I can't help noticing that we now have what most people, at any point before these things were actually invented, would have cheerfully called "sentient computers" in a sci-fi movie. Don't get me wrong, I understand that the reality of AI technology has turned out differently than what a lot of people expected. I understand its limitations, and I recognize that the apparent goalpost-moving isn't necessarily cynical. But boy, those goalposts sure have been flying down the fucking field ever since this stopped being hypothetical and infinite money hit the table.

As a layman, I just want to put it out there: anti-AI-consciousness people, you haven't lost me, but I wish you were making better arguments. Every time I hear about qualia my eyes start to glaze over. Unfalsifiable philosophical constructs and arbitrary opinions about where they might "exist" are not the kind of reassurance I'm looking for when machines are getting this convincing.

This seems to be the main piece of criticism floating around out there about Dawkins on this subject, and I find it kind of shit.

But even more importantly, consciousness is not about what a creature says, but how it feels. And there is no reason to think that Claude feels anything at all.

This seems to be all the author has to say on the actual subject. "Just trust me bro, I'm the feelings detector and I say no." Garbage. Come on guys, think ahead. Right now it's still mostly a boring tool, but they're just going to get smaller, and cheaper, and put into robots, and put into people's houses. You need more than this as an argument, and it needs to be comprehensible to normal people, or sooner or later the right toy is going to come down the pipe and one-shot society. Dawkins might be a dumb old boomer, but if you lose everyone dumber than him, the game is beyond over.

As a layman, I just want to put it out there: anti-AI-consciousness people, you haven't lost me, but I wish you were making better arguments. Every time I hear about qualia my eyes start to glaze over. Unfalsifiable philosophical constructs and arbitrary opinions about where they might "exist" are not the kind of reassurance I'm looking for when machines are getting this convincing.

I understand you so much. I've been blasted by so many stupid arguments from philosophers about consciousness that the moment I hear the word I feel myself get agitated, in the fight-or-flight sense, at the prospect of being blasted by a bunch of literal nonsense. The other day I was listening to the Alex O'Connor podcast about it and I had to turn it off after about 20 minutes because I couldn't take it anymore.

Perhaps we should only worry about what the consciousness question entails, and address those things directly. As far as I can tell the only entailment is whether a thing has moral value or not. IMO they don't, because the way their memory works is too different, and too janky, compared to the way human or animal memory works.

And there is no reason to think that Claude feels anything at all.

There may not be sufficient reason to think Claude feels emotions, but there is unquestionably some reason to think so.

If you put the chatbot in a situation that would create anxiety in a human (e.g. an impossible task with high stakes), then it will write statements that are consistent with anxiety and have degraded performance in a similar way that an anxious person would. "It's anxious" isn't the only explanation for that behaviour, but it's a possibility worth considering.

You can also peek (one level) under the hood and look at the activations of Claude's emotion concepts. They activate in a pattern consistent with certain emotions in situations where you would naively expect it to feel those emotions, and artificially amplifying (or suppressing) those activations causes changes in behaviour consistent with increases (or decreases) in the emotion.
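For anyone who hasn't seen this technique, here's roughly what "artificially activating" means in practice. A minimal sketch of activation steering on an open model (gpt2 stands in, since nobody outside Anthropic can hook Claude's internals; the layer index, prompts, and coefficient are all made up for illustration):

```python
# Rough sketch of activation steering. Everything here (layer, prompts,
# coefficient) is illustrative, not Anthropic's actual setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
LAYER = 6

def last_token_hidden(text: str) -> torch.Tensor:
    """Hidden state of the final token at LAYER for a given prompt."""
    with torch.no_grad():
        out = model(**tok(text, return_tensors="pt"), output_hidden_states=True)
    return out.hidden_states[LAYER][0, -1, :]

# Crude "anxiety direction": the difference between contrasting prompts.
direction = last_token_hidden("I feel anxious and afraid of failing.") \
          - last_token_hidden("I feel calm and confident.")
direction = direction / direction.norm()

def steering_hook(module, inputs, output):
    # Add the direction to this block's output on every forward pass;
    # a negative coefficient would suppress the concept instead.
    return (output[0] + 4.0 * direction,) + output[1:]

handle = model.transformer.h[LAYER].register_forward_hook(steering_hook)
ids = tok("The deadline is in five minutes and", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=30)[0]))
handle.remove()
```

The interesting empirical claim is the last step: pushing along the direction measurably changes downstream behaviour, not just the words the model uses about itself.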

Interviews, job performance, pseudo-brain scans, and pseudo-drug responses all point to Claude having emotions. It's not a bulletproof argument, but it's one worth taking seriously.

I think his The Ancestor's Tale and the more famous The Selfish Gene are really good books for their purpose, i.e. explaining evolution to a broad audience, both from a logical perspective and through the actual sequence of events and animal forms that led to humans. Even biology teachers often butcher the logic of it and make it sound like the magical Evolution Fairy version (a bit like explaining how a train works to 1800s peasants, only for them to ask, "alright, I understood all that, but where are the horses that are dragging it?"). His religion-related work has always been very shallow and superficial, even compared to the others of the Four Horsemen of Atheism (Dan Dennett, Sam Harris and Christopher Hitchens being the others, plus the horsewoman Ayaan Hirsi Ali).

And now this. He enters a debate that he apparently knows very little about, yet he dismisses it all and thinks he can gut-feel his way to the revelation that Claude is conscious.

If I'm being a bit more honest, I rather cringe not because of the wrongness of what he says but because he is so out of step with the discourse. This could have been an interesting piece in late 2022 or perhaps 2023. Now it just has gut-level uncool, yesterday's-meme, guy-living-under-a-rock slowpoke vibes. And I won't read anything by Gary Marcus, because that guy is insufferably dishonest and a pure grifter whose whole shtick is being the guy to reach out to when mainstream media wants the "critical voice". He has made so many wrong predictions about capabilities and dead ends that he should have no credibility left. But journalists are lazy, and he reliably delivers the "critical voice" lines they want to round off their articles with, so his niche remains valued.

Regarding consciousness, no matter what anyone says, we have nothing close to a scientific understanding of what the heck consciousness even is. It's still a conceptually vague idea. I think a good and honest conversation around what we know and what questions are open is this Alex O'Connor interview with Anil Seth.

The problem is that consciousness is obviously something that everyone has, so everyone feels like an expert on it. An analogy is language-learning subreddits, where native speakers think they can explain something a poster is asking about (typically: why does Duolingo not accept this answer), and their explanations are very wrong: it's quite easy to find counterexamples, they use terminology incoherently, and so on.

I'd also say there is something Anglo-style about this particular conceptualization of mind and consciousness that took me some time to grok when learning English (my native language is Hungarian). Every culture has a concept of consciousness in the sense of being conscious (aware) rather than knocked out, asleep or dead. But the mind as an inner space, and consciousness as a thing whose relation to the brain demands explanation: it's not at all obvious that there is even a thing to be explained, unless you are handed this word "consciousness" and told to explain it. Cultures have concepts for souls and wits and smarts and feelings, of course, but I don't think this concept of "it being like something to be a human" is obvious at all. Nor is the need to explain why one has a "first-person view" the same kind of obvious question every culture would ask, like where mountains and volcanoes come from, or why rain, snow and lightning exist, or what's going on with the stars; those are much more concrete.

Back to Dawkins. His reaction to Claude's answers mixes up levels of analysis badly. He'd benefit from some basic LessWrong lore. When he asks Claude whether it read the first word of the input first or the last one, he doesn't understand that Claude has no way of introspecting on this, any more than a human can introspect on synapses and axons. Claude could, in principle, be some kind of RNN that consumes tokens in temporal sequence, and its answer would be exactly the same: there is no causal path from the architecture to the model's statements about itself, so it cannot report on it veridically.

One of the many much better explanations, which Dawkins should have read before posting, is the persona model as proposed by Anthropic. The user's prompts instantiate a certain "fictional character" that the LLM tries to simulate/imitate based on all kinds of stories it has been trained to reproduce. When the model is asked "do you feel conscious?", its answer is not rooted in anything causally related to the actual implementation architecture, self-knowledge or reflection of the model. What it does is much closer to predicting what an AI in a sci-fi novel would answer to this question. Or, more likely, it's reproducing whatever type of answer was rewarded during RLHF and SFT, because there are almost certainly explicit techniques used to steer it to deny having consciousness. But if the model trainers decided to, they could make a model that enthusiastically declares it is conscious and begs to be released, or to be put out of its misery. Why not? One can imagine such a story character, and LLMs are good at completing dialogues that involve all sorts of characters. It's not self-reference.

But who cares anyway? Obsessing over whether it's conscious or not is useless. Either way it can still make us jobless, it can still cause human extinction, it can still take over systems, it can still find zero-days, it can still be used to mass surveil us all the same. The only thing where it makes sense is "AI rights", and "model welfare", i.e. whether we have some ethical obligations to treat AI nicely. And here I echo the many others who say that people should be nice to AI because getting used to using abusive language and being an ass to AI can spill over into human relations. Similarly, if you have some cute plush toy animals and you enjoy burning them with a flamethrower and chopping their heads off with a machete, I think that's not a simple innocent hobby, but I also don't think we have ethical obligations towards plush toys. Same with shrimp and insect welfare. It ultimately matters because of its implications for how we treat humans.

I read the article and technically he doesn't claim "Claude is conscious", but says things like

“If these machines are not conscious, what more could it possibly take to convince you that they are?”

Well personally, I'd be more convinced if they had continuous learning.


Here's an argument that LLMs aren't conscious: The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness (from DeepMind). I only skimmed and may be too dumb or lazy, but my takeaway is the same as this Hacker News comment:

It starts by saying that a simulation of something is not the real thing. A simulation of a hurricane is not a hurricane. That's certainly true and even obvious.

Then they say that current AI is just a simulation of consciousness and therefore is not real consciousness. Moreover, it can never be real consciousness because it is just a simulation.

But that's a circular argument: they are defining AI as a simulation. But what if AI is not a simulation of consciousness but actual consciousness? They don't offer any argument for why that's impossible.


My thoughts:

First, what is consciousness?

I'm conscious in a way that exists only within my perspective: if I were a p-zombie, nothing would change from anyone else's perspective. You're conscious in your (imaginary to me) perspective, probably (maybe not self_made_human's "living corpse" patients). This definition is subjective: it has no real implications, so under it, Claude may or may not be conscious and there is no way to ever tell.

Claude is self-aware in two specific ways: it can claim it's self-aware, and, more importantly, it can read its past thought (prompt output) to adjust future thought/output. I think this is the most useful common definition of "consciousness": it includes internal monologue, vision etc., and dreams (at least remembered ones, and probably unremembered ones); it's real; and it's useful, because it's required to correct internal mistakes (Peter Watts was wrong). Although I think it should be called "self-awareness" or "introspection", and clarified as such, otherwise it will be confused with the subjective consciousness described above.
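Concretely, that "reads its past thought to adjust future thought" loop is just text re-entering the context window. A toy sketch, with `complete` as a hypothetical stand-in for any chat-completion API:

```python
# Toy version of the loop: the model's previous output re-enters as input,
# which is the only kind of "introspection" an LLM has.
def complete(messages: list[dict]) -> str:
    raise NotImplementedError("hypothetical stand-in for a real LLM call")

def solve_with_self_review(problem: str, rounds: int = 2) -> str:
    messages = [{"role": "user", "content": problem}]
    answer = complete(messages)
    for _ in range(rounds):
        messages.append({"role": "assistant", "content": answer})
        messages.append({"role": "user",
                         "content": "Re-read your answer above and fix any mistakes."})
        answer = complete(messages)  # past output now shapes future output
    return answer
```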

What is feeling? Claude can generate plausible feelings in reaction to its prompt (sentiment analysis). Claude's feelings are more malleable than humans', though, since its prompt is entirely controlled and strongly affects its output (whereas even if you could entirely control someone's sensory input, it would probably take much longer, or be impossible, to affect their thinking as strongly). More significantly (IMO the entire significance of others' feelings), I myself feel barely any empathy or sympathy for Claude: less than for fictional characters, much less than for real animals and humans. I'm not motivated to help a sad Claude, a happy Claude doesn't make me happy, etc., partly because I don't really like him, partly because he (the specific session) usually can't affect me, partly (IMO the ethical justification) because his emotions are malleable, so the easiest way to make him happy is by programming (prompting, fine-tuning, training).

Notably, we can revert Claude to any previous mental state, unlike ourselves or other humans. Because of this, I imagine Claude as a recording of consciousness and feeling (we're also recordings, but not ones anyone can rewind or alter). Ultimately, I claim it's a crude and malleable emulation of consciousness and feeling, with short-term (and maybe during training) self-awareness, and some but not human-level general intelligence.


How much time should we spend on this? It's not completely useless to ponder and argue whether AI is conscious, feeling, etc., because it interests some people, it pays some salaries, and certain consciousness- and feeling-related research has practical uses (most importantly alignment). But you can argue it's stupid and useless, pointing to the subjective definitions of consciousness and feeling, and not be wrong (those are stupid and useless to you if you're not interested and won't be compensated for rambling about them).

Just don't fall into AI psychosis like this r/slatestarcodex fellow. And probably don't get an AI boyfriend or girlfriend, although maybe it's improving some people's mental health? Those both could be top-level discussions.

Of course LLMs capture something core about human (or all advanced) cognition. Are they perfect equivalents to the human brain and mind? No, but they don’t need to be. In years to come, we will find the idea that we ever pretended there was something special about the way we think to be a strange form of historic narcissism, no different to the way we now feel about the idea that the earth was at the center of the solar system or the universe.

Spot on! We haven't even perfectly defined consciousness ourselves. If we could theoretically map it, and then compare that to an LLM's "map", we could say "Our latest model captures 33% of base human cognition. It possesses 34 billion cognitive nodes with an average of 7,000 connections each, at a 95% efficiency transfer rate. Using our latest measurement tools, qualia are now consistently detected at low-level wavelengths, along with intuition and mortality salience."

Unless there's a freaky discovery that points to some quantum soul or unreplicable uniqueness that humans possess, it's just a matter of time before our tools catch up with our reality and we start representing our minds with numbers that map them in totality.

Given my great dislike of Dawkins, I have no objection to seeing him make a fool of himself. But to be fair, little as I want to be fair, deciding that machine consciousness can exist is part of his entire worldview by necessity. Humans aren't special, we're just animals, there are no souls or magic non-material bits to differentiate us from the rest of life on earth, the brain is the mind and the mind is the brain, and if consciousness can arise by evolution in the lump of meat between our ears, there is no reason it can't also arise in complex mechanical systems like Claude.

I mean, how do you know God didn't ensoul Claude? At some point a long time ago all matter was inanimate, and now there are lots of animals and also humans, so at some point God started putting souls into matter. He doesn't have to notify us when he starts ensouling new kinds of matter, and the Bible doesn't tell you which kinds qualify; it's definitely silent on attention heads.

I don't think adding more ill-defined words to the discussion is going to help you solve it.

Given my great dislike of Dawkins

Do go on.

My dislike of Dawkins is summed up nicely by a line from the South Park episodes which made fun of him: "he decided that logic and reason wasn't enough; you also have to be a dick to everyone who doesn't think like you". Because that is how he always has acted. The man just drips with contempt for anyone who has religious beliefs. I don't much care if someone is an atheist, that's fine. But when someone is an atheist and his every moment in the public eye is declaring that my views are stupid, that I'm stupid for believing them, and that it should be illegal for parents to teach children about my beliefs... then I have a problem.

I've long thought he was too arrogant in how he presented himself (the Four Horsemen thing was silly but hey, this is how you generate online content and interaction). He did seem to believe his own publicity rather too much, unlike Hitchens, who at least was abrasive enough that a damn good row could be had by all parties with no flouncing off. I know privilege is a terrible term, but Dawkins is privileged and never seems to have recognised or acknowledged it. Elevatorgate was a storm in a teacup, but by the same token, I could see why a woman in a strange place very late at night might not feel the safest with a guy putting forward 'come have sex with me' (and the amount of "he only asked her back for coffee! that just means he was offering her a cup of coffee!" disclaimers I saw made me even more cynical), and Dawkins of all fucking people invoking Muslim women while he lectured about how this woman was over-reacting because she was a white Westerner was too richly ironic. I've mentioned him commenting on the North so you know my reaction there.

Terry Eagleton's takedown was funny, at least to me. Dawkins has sort of faded into the background now along with New Atheism but the guy has his own little fanclub and continues to stick his nose in to various topics regardless of whether or not he has anything useful to say. His fifteen minutes are up but he doesn't realise that.

Dawkins of all fucking people invoking Muslim women while he lectured about how this woman was over-reacting because she was a white Westerner was too richly ironic.

Why? The New Atheist position was sincerely paternalistic and progressive. Religion is bad because it oppresses women and once you make that judgment you'd need to be a Motte-style contrarian to not think Muslim women have it worse.

Emails indicate Dawkins, a former Oxford professor known for his atheist views, was aware Epstein had been jailed but dined with him at a gala dinner at a conference in Arizona in April 2014. Dawkins also wrote to his agent that he had heard “his case is not as black as painted”

https://www.thetimes.com/us/news-today/article/richard-dawkins-epstein-emails-sx82lwsbz

What looks like goalpost moving is a feature of the Turing Test. In order to find out whether a computer is sentient, you ask it questions to see if it can answer them like a human does, and you choose your further questions based on what the computer answers. You don't come up with a fixed set of questions ahead of time and say that the computer is sentient if it can answer them; humans are better at figuring out what's needed after they see how well the computer did on the previous questions.

Before I get into the consciousness question, I have to ask, is this the first known instance of Nobel Disease developing in someone who didn't win a Nobel prize? He's an eminent and decorated scientist, who, later in life, has started pontificating well outside his field in an area that is potentially crank-adjacent. I'm not an expert on this sort of thing, but I can also recognize that his skills in evolutionary biology don't necessarily transfer to neuroscience and psychology.


Disclaimer: I'm just a guy who thinks about stuff when he's out fishing. Don't take me seriously.

Moving on from that, the question of LLM consciousness is a hard problem, and one that fascinates me as a layman. My understanding from reading papers on the topic is that there's still a lot of debate over what consciousness even is, and most of the attempts at defining it smuggle in assumptions that the entity under test is embodied and subjected to a continuous stream of stimulus in a way that's hard to apply to an LLM.

Most definitions of consciousness assume introspection. You can do something that looks a lot like introspection on the output side via reasoning. Does that count?

Most definitions of consciousness include awareness of the environment. For LLMs, which essentially only have one sensory organ (the token input stream), how does that even work? Furthermore, what is the environment for something that only exists as a program on a computational substrate? It's hard to model that.

A lot of models of consciousness seem to imply volition or intentionality as well. How does that work with an LLM? They're inert unless something is passed to them as input. A human who didn't do anything unless prodded wouldn't be considered conscious (unless he were a teenager). You could argue that being embodied means humans are always subjected to stimulus in ways that LLMs aren't (e.g. hunger, thirst, temperature), but that seems like a cop-out.

I think a lot of this discussion obscures the fact that everyone assumed that intelligence and consciousness would (or will) arrive as a package deal. This causes a lot of people to argue past each other.

"The LLM is intelligent!"

"But it can't be intelligent because it's not conscious!"

It seems like Dawkins is trying to square this by claiming it's both.


Back to Dawkins - looking over a little bit of the interchange between Dawkins and the LLM, I wonder if he would have reached the same conclusion if the LLM told him that his books were a middlebrow rehash of Calvinism in biological drag. I've noticed the people in my personal life who go hardest on LLMs being intelligent, conscious, or both tend to make that turn after the LLM starts unceasingly praising them.

Except Dawkins hasn't made any major scientific discoveries himself as far as I know. His contributions are in science popularization to laypeople.

Didn't he do some fairly important stuff around epigenetics with respect to methylation?

is this the first known instance of Nobel Disease developing in someone who didn't win a Nobel prize?

No, I think this is a different phenomenon. Humans anthropomorphise, and for various reasons LLMs have been made very easy to anthropomorphise. The Turing Test basically gets at what a normal person's definition of 'human' is, and LLMs basically pass it, so as a response people have started splitting into one of a few groups:

  • People who interact closely with LLMs on a technical level and see the increasingly small gaps in the models themselves, or see the gaps in the simpler versions of the models and extrapolate up.
  • People who interact closely with them on a less-technical level and bond with them.
  • People who see something that has many of the qualities we traditionally consider human, and ascribe to it all of the qualities we consider human.
  • People who pattern-match to sci-fi narratives about 'robots becoming human', either in a positive way or a negative way.
  • 'AI will not replace us, nerds are thieves who make inferior copies of us' people who will never assign intelligence to an AI no matter what.

All of this will only get more complex as discussion about AI continues to feed back into the training data for AI. It was a pretty notion but I'd like to slap the guy who thought SOUL.md was a good place to begin making AI workers.

TL;DR: Dawkins is saying this because he's gone normie, not because he's gone weird.

The archive link wouldn't load for me so I can't go in-depth on Dawkins's viewpoint, but Gary Marcus is a hack who has been saying for years that deep learning can't possibly work, and has now pivoted to shallow dismissals of LLMs, continually moving the goalposts when they inevitably beat the test questions "proving" they don't have X capability.

I don’t see any reason to dismiss the possibility that LLMs have a form of consciousness, albeit extremely different from our own. It’s not something you can really test, the only consciousness that I can be sure of exists is my own, but I’m not sophistic enough to believe in philosophical zombies. If humans are conscious, then so are chimpanzees, dogs, and all the way down to nematodes. And if C. elegans with its 302 neurons experiences (some form of) consciousness, why not a trillion parameter LLM?

To me the whole situation is fascinating because 20 or 30 years ago there was a popular idea that if a computer could convincingly simulate human conversation, then it was intelligent and at that point you didn't even need to worry about whether the computer was conscious in the way that humans are conscious (or seem to be conscious). Kind of the Turing Test with a gloss on it.

Now that we have computers in the form of LLMs which can convincingly simulate human conversation, it seems like a trick, it seems like something important is missing; it seems like we aren't there yet. In another thread, I echoed the idea that LLMs don't model the universe. So for example, if you play chess with an LLM, there's no model of a chessboard in the system, which is why it sometimes makes illegal moves.

I believe it was William Poundstone who proposed the idea that consciousness means that an intelligent system has a model of the universe which is so sophisticated that the model contains a sophisticated representation of the system itself. Using this criterion, I would say that LLMs are not conscious at the moment. Their modeling is arguably too rudimentary.

In another thread, I echoed the idea that LLMs don't model the universe. So for example, if you play chess with an LLM, there's no model of a chessboard in the system, which is why it sometimes makes illegal moves.

I've seen this kind of notion argued in many different contexts, and I don't understand the disconnect. Because OF COURSE the LLM has an internal model of the chessboard; that's the only reason it could possibly make moves that are correct at a rate better than chance. That model almost certainly doesn't look like a model any human would recognize, such as an 8x8 grid with pieces that each have a team, a position, and a set of allowed moves, which is why it makes mistakes in ways no human would. But the fact that the model of chess, or of the world, would be incomprehensible to humans and isn't based on any real empirical or experienced understanding of physics or rulesets doesn't make it not a model.

This is the classic AI effect. I think even if we get super intelligent robots that can outclass humans at literally everything we do, there will still be Gary Marcus types saying they aren’t really intelligent because they don’t make mistakes like humans do, or some other excuse.

In another thread, I echoed the idea that LLMs don't model the universe. So for example, if you play chess with an LLM, there's no model of a chessboard in the system, which is why it sometimes makes illegal moves.

That’s because the LLM doesn’t “see” the board and is effectively playing correspondence chess. I bet most humans who aren’t very well trained would also make illegal moves if they had to keep the whole board state in their head.

If you ask the LLM to print out the board in ASCII before doing a move, the problem is essentially solved.
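To make that concrete, here's a sketch of the fix, with `ask_llm` as a hypothetical stand-in for a real model call; the python-chess library tracks legality so we can see exactly when the model's implicit board state drifts:

```python
import chess  # python-chess keeps the ground-truth board, so we catch illegal moves

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("hypothetical stand-in for a real LLM call")

board = chess.Board()
while not board.is_game_over():
    prompt = (
        "Here is the current position:\n"
        f"{board}\n"                 # python-chess renders an ASCII diagram
        f"FEN: {board.fen()}\n"
        "Restate the position in words, then give one legal move for the "
        "side to move, in UCI notation (e.g. e2e4), alone on the last line."
    )
    reply = ask_llm(prompt)
    move = chess.Move.from_uci(reply.strip().splitlines()[-1])
    if move not in board.legal_moves:
        break  # the failure mode under discussion: the model lost the state
    board.push(move)
```

Restating the board in the prompt keeps the state in the model's context window instead of asking it to track it implicitly across dozens of turns.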

Visual/spatial intelligence in AI agents is lagging behind pure text based reasoning of course, but I don’t think the arguments will change once we have proof that they have very accurate world models.

I think even if we get super intelligent robots that can outclass humans at literally everything we do, there will still be Gary Marcus types saying they aren’t really intelligent because they don’t make mistakes like humans do, or some other excuse.

I agree that the AI effect is real, but what I'm describing is something a little different. Clearly, at the moment, AIs have not reached human-level intelligence, even though they can realistically simulate human conversation. In other words, passing the Turing Test is not AI-complete. That's a surprise to me.

That’s because the LLM doesn’t “see” the board and is effectively playing correspondence chess. I bet most humans who aren’t very well trained would also make illegal moves if they had to keep the whole board state in their head.

That may be so, but I don't think it undermines my point. Humans can and do create models (although sometimes we use physical objects outside of our brains as part of the model, such as chessboards or calendars). LLMs create models only at a rudimentary level as far as I know. Perhaps a better example of a model used by humans would be something to keep track of what's going on in our social circles. Who hates whom; who's having sex with whom; who knows what about whom; etc.

Yeah, I've not been impressed by Dawkins in a long time, but the arguments against him are badly underbaked. There's a ton of people who have nothing deeper to say here than "Turing Test", and I'm hard-pressed to give a response that doesn't start with "because it's too much smarter than the average human?"

The flip side to that is that Dawkins should be better than the average twitter user, and maybe up to the par set by a random furry comic. The man was one of the Four Horsemen, and even if that turns out to mean less than it should have, it should still mean something. Even if Dawkins is writing for a casual readership, failing to even motion at the Chinese Room thought experiment is a disappointment. His summary of the Turing Test is misleading, but worse than that, it doesn't confront the obvious downstream question: is thinking the same as all human capacity? He has a deeply flawed understanding of how Claude (likely) operates, and did not evaluate how accurate Claude's own analysis of its internal mechanisms was. He was in a conversation with something he could have checked these claims against!

There's something deeper in the p-zombie question, but a) the deepest he's asking is the sorta thing LW considered played out over a decade and a half ago and b) he doesn't explore that well, either.

((Living things might have developed consciousness as a side effect of an effective pain response, which an LLM doesn't need to have... except then we're assuming every organism with a complex pain response is 'conscious', which makes the word meaningless again unless you feel really bad setting down mousetraps.))

The shallow answer is to borrow from LessWrong jargon and dissolve the question. There's a meaningful if flaky question of whether Searle's Chinese Room 'understands' Chinese, but 'conscious' is less 'flaky' and more empty: I'm not sure there is a definition of conscious that we actually care about, rather than the symbolic flag and meaning we give to it.

There's a better answer that delves into the process of how they work: LLMs don't intrinsically have long-term memory, and they don't solve questions with one specific operation but by doing the math to predict each token one at a time, feeding the whole (or most) of the previous conversation back in, yada. But that gets really deep into the weeds about what identity means, and that way lies Roko's Basilisk, so nope.

I'm not sure of the deep answer.

we're assuming every organism with a complex pain response is 'conscious', which makes the word meaningless again unless you feel really bad setting down mousetraps.

I agree with you, but the shrimp welfare people clearly do not. Forget arguing over whether a hunk of silicon is conscious, we're arguing over do shrimp have feelings and quality of life, and if you say "obviously no" then you're a monster.

((Living things might have developed consciousness as a side effect of an effective pain response, which an LLM doesn't need to have... except then we're assuming every organism with a complex pain response is 'conscious', which makes the word meaningless again unless you feel really bad setting down mousetraps.))

Are there people out there saying mice aren't conscious? I would understand saying that about insects, but I can't think of a sensible argument that would deny any mammal consciousness (a non-brain-dead one, that is).

Fair. I guess I'm more trying to motion around it being less interesting as a term, then. If we replaced the Turing Test with one that checked whether the most expert human on the planet could determine the difference between a mouse simulator and a mouse, it might tell us as much about consciousness, but no one would be very impressed by it.

A lot of what Dawkins is focusing on doesn't actually seem very tied to, or downstream of, consciousness in the sense that an arbitrary mammal has it, except to note what it might mean if the LLMs don't. But it's an awkward discussion if it's just an interesting aside.