
Culture War Roundup for the week of April 27, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Okay, it's Sunday, so I'm going to try my hand at a low-stakes OP. Apparently Richard Dawkins thinks Claude is conscious. The reaction seems to be, pretty much universally, that he's a dumb old boomer making a fool of himself, and I guess that's true. I'm not prepared to come to his defense on it.

Still, I can't help noticing that we now have what most people, at any point before these systems were actually invented, would have cheerfully accepted as "sentient computers" in a sci-fi movie. Don't get me wrong, I understand that the reality of AI technology has turned out differently from what a lot of people expected. I understand its limitations, and I recognize that the apparent goalpost-moving isn't necessarily cynical. But boy, those goalposts sure have been flying down the fucking field ever since this stopped being hypothetical and infinite money hit the table.

As a layman, I just want to put it out there: anti-AI-consciousness people, you haven't lost me, but I wish you were making better arguments. Every time I hear about qualia, my eyes start to glaze over. Unfalsifiable philosophical constructs and arbitrary opinions about where they might "exist" are not the kind of reassurance I'm looking for when machines are getting this convincing.

This seems to be the main piece of criticism floating around out there about Dawkins on this subject, and I find it kind of shit.

But even more importantly, consciousness is not about what a creature says, but how it feels. And there is no reason to think that Claude feels anything at all.

This seems to be all the author has to say on the actual subject. "Just trust me bro, I'm the feelings detector and I say no." Garbage. Come on, guys, think ahead. Right now it's still mostly a boring tool, but these things are just going to get smaller, and cheaper, and put into robots, and put into people's houses. You need more than this in terms of argument, and it needs to be comprehensible to normal people, or sooner or later the right toy is going to come down the pipe and one-shot society. Dawkins might be a dumb old boomer, but if you lose everyone dumber than him, the game is beyond over.

To me the whole situation is fascinating because 20 or 30 years ago there was a popular idea that if a computer could convincingly simulate human conversation, then it was intelligent and at that point you didn't even need to worry about whether the computer was conscious in the way that humans are conscious (or seem to be conscious). Kind of the Turing Test with a gloss on it.

Now that we have computers in the form of LLMs which can convincingly simulate human conversation, it seems like a trick, it seems like something important is missing; it seems like we aren't there yet. In another thread, I echoed the idea that LLMs don't model the universe. So for example, if you play chess with an LLM, there's no model of a chessboard in the system, which is why it sometimes makes illegal moves.
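If you wanted to pin down the illegal-move claim empirically, it's easy to measure: have the model play and count how often its proposed moves are illegal. A minimal sketch using the python-chess library, where ask_llm_for_move is a hypothetical helper (not a real API) that prompts whatever model you're testing and returns a move in standard algebraic notation:

```python
# Sketch: measure how often an LLM proposes illegal chess moves.
# ask_llm_for_move(history) -> str is a hypothetical helper that prompts
# the model with the moves so far and returns its reply in SAN.
import chess

def illegal_move_rate(ask_llm_for_move, num_games=50, max_plies=60):
    illegal, total = 0, 0
    for _ in range(num_games):
        board = chess.Board()
        history = []
        for _ in range(max_plies):
            san = ask_llm_for_move(history)
            total += 1
            try:
                board.push_san(san)  # raises ValueError on an illegal move
            except ValueError:
                illegal += 1
                break  # abandon the game at the first illegal move
            history.append(san)
            if board.is_game_over():
                break
    return illegal / total if total else 0.0
```

A rate near zero would be hard to square with "no model of a chessboard"; a nontrivial rate is the observation this argument rests on.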

I believe it was William Poundstone who proposed the idea that consciousness means that an intelligent system has a model of the universe which is so sophisticated that the model contains a sophisticated representation of the system itself. Using this criterion, I would say that LLMs are not conscious at the moment. Their modeling is arguably too rudimentary.

In another thread, I echoed the idea that LLMs don't model the universe. So for example, if you play chess with an LLM, there's no model of a chessboard in the system, which is why it sometimes makes illegal moves.

I've seen this kind of notion argued in many different contexts, and I don't understand where the disconnect is. Because OF COURSE the LLM has an internal model of the chessboard in the system; that's the only reason it could possibly make moves that are correct at a rate better than chance. That model almost certainly doesn't look like a model that any human would recognize, such as an 8x8 grid of pieces, each with a side, a position, and a set of allowed moves, which is why it makes mistakes in ways that no human would. But the fact that the model of chess - or of the world - would be incomprehensible to humans, and isn't based on any real empirical or experienced understanding of physics or rulesets, doesn't make it not a model.
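And "has an internal model" is testable rather than a matter of assertion: the Othello-GPT probing work does exactly this kind of test (for Othello rather than chess), training simple classifiers on a transformer's hidden activations to see whether board state can be read off them. A minimal sketch of that kind of probe, assuming you have already collected hidden-state vectors paired with ground-truth square contents (both are assumptions here, not something established in this thread):

```python
# Sketch: linear probe for board state in a model's hidden activations.
# hidden: (N, d_model) array of activations captured while the model
#         processed move sequences (assumed already collected).
# square_label: (N,) ground truth for one chosen square at each step,
#         e.g. 0 = empty, 1 = white piece, 2 = black piece.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def probe_square(hidden, square_label):
    X_tr, X_te, y_tr, y_te = train_test_split(
        hidden, square_label, test_size=0.2, random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    # Accuracy well above chance means the square's state is linearly
    # decodable from the activations, even though nothing in there looks
    # like an 8x8 grid to a human.
    return probe.score(X_te, y_te)
```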

I've seen this kind of notion argued in many different contexts, and I don't understand where the disconnect is. Because OF COURSE the LLM has an internal model of the chessboard in the system; that's the only reason it could possibly make moves that are correct at a rate better than chance

I disagree; another possible reason is that it simply makes a good (but imperfect) guess as to what's likely to be the next move after a sequence of moves, based on all the chess games stored in its database.

So for example, if the LLM is playing black and you open e4, it's pretty likely that the LLM will respond e5 or c5, for basically the same reason it would likely output "lamb" after "Mary had a little" and "California" after "The Golden Gate Bridge is located in".
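To make the picture concrete, here's a toy version of what I mean: a pure n-gram lookup over a corpus of games. No board, no rules, just "what usually comes next after this context." The corpus and helper names are made up for illustration:

```python
# Toy sketch: a statistical next-move "predictor" with no board model.
# games is a hypothetical corpus: a list of games, each a list of SAN moves.
from collections import Counter, defaultdict

def build_table(games, context=2):
    table = defaultdict(Counter)
    for game in games:
        for i in range(len(game)):
            ctx = tuple(game[max(0, i - context):i])
            table[ctx][game[i]] += 1
    return table

def guess_next_move(table, history, context=2):
    counts = table.get(tuple(history[-context:]))
    # Returns the most common continuation seen after this context --
    # e.g. "e5" or "c5" after "e4". Legality is never checked.
    return counts.most_common(1)[0][0] if counts else None
```

An LLM's version of this is vastly more compressed and smoothed out, but the intuition is the same: frequency, not a board.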

based on all the chess games stored in its database.

It doesn't have a "database"; this is a fundamental misunderstanding of what's going on under the hood. With LLMs now solving open math problems, I'm puzzled that the discourse still revolves around "it's just doing what it's seen before," with various levels of unsound understanding.

LLMs can reproduce 96% of the text of Harry Potter verbatim. Even if they do not store all their training data with perfect fidelity, their underlying operations are such that it doesn't matter. It's data compression with variable loss depending on what they were trained on. When 1:1 outputs from their memories of training data can't exist, they reach for similar patterns and smooth over the disjunctions using sophistry. They must be commended for semantic fluency.
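Figures like that come from prefix-prompting experiments: feed the model a chunk of the book and check whether greedy decoding reproduces the continuation word for word. A sketch of the general method (generate is a hypothetical stand-in for whatever model API is being tested; exact window sizes and match criteria vary by paper):

```python
# Sketch: estimate verbatim memorization by prefix-prompting.
# generate(prefix, n) is a hypothetical greedy-decoding helper returning
# the model's next n tokens; tokens is the book as a list of tokens.
def verbatim_rate(generate, tokens, prefix_len=50, target_len=50, stride=500):
    hits, trials = 0, 0
    last_start = len(tokens) - prefix_len - target_len
    for start in range(0, last_start, stride):
        prefix = tokens[start : start + prefix_len]
        target = tokens[start + prefix_len : start + prefix_len + target_len]
        if generate(prefix, target_len) == target:
            hits += 1
        trials += 1
    # Fraction of sampled windows the model continues verbatim.
    return hits / trials if trials else 0.0
```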

What is this supposed to prove? There are people who have memorized the Torah or the Quran. It's still not the case that they are merely doing some kind of database lookup when you ask them about a verse; the lookup framing implies a fidelity that simply doesn't exist. And if you concede that there isn't perfect fidelity, one wonders what the purpose of discussing "database lookups" in the context of LLM inference is, other than rhetoric.

When 1:1 outputs from their memories of training data can't exist, they reach for similar patterns and smooth over the disjunctions using sophistry.

Dismissing as mere sophistry novel LLM-discovered software exploits and math theorems is absurd.

It goes towards proving the basis for what we observe: that LLMs are very good at recalling large and disparate amounts of knowledge but are poor at functionally utilizing said knowledge, especially in matters complex, unusual, or otherwise not 1:1 with stuff from their training material. Whether this proves or disproves that they are sentient or intelligent or whatever is a matter of semantics, but what it does do is give us a clue as to why we observe certain disparities in their capabilities, and it can help inform our expectations about what further capabilities might emerge.

Humans lean on theory, trained pattern spotting, and various heuristics or memorized devices (e.g. king opposition) when playing chess. Memory plays a role too, but outside of maybe Magnus Carlsen it is dwarfed by the capacities of LLMs. This is a level of intelligence that can also be employed for creating architecture or symphonies. LLMs lean a lot harder on brute memory recall (although I won't discount entirely their capacity for higher-tier reasoning) through hyper-intensive statistical calculations, and these make them very good for things like discoursing on a broad variety of facts or semantically juggling abstractions, but they do not, apparently, allow LLMs to create complex architecture, symphonies, or anything else involving the complex interlocking of smaller elements.

The small elements are found in its memory and can be extracted intact individually, but the LLMs do not possess the intelligence to fit them together in complex ways; they do not operate at a level of intelligence that would allow it. They are hyper-intensive exploiters of lower-order processes but not higher-order ones. That's what's suggested by the fact that they can recall 96% of a novel: they lean on highly scaled, relatively brutish methods to repeat stuff verbatim, or close enough.

poor at functionally utilizing said knowledge, especially in matters complex, unusual, or otherwise not 1:1 with stuff from their training material.

Like I said:

Dismissing as mere sophistry novel LLM-discovered software exploits and math theorems is absurd.

"LLMs haven't written a beautiful symphony or designed a beautiful building" is simply moving the goal posts. There's no reason that those are the true test of putting things together and theorems and exploits don't count.

I take the ‘opposite’ view that LLMs are becoming extraordinary intelligences, but I also think the distinction between memory, recall, training set, database etc is unnecessarily importing computer science distinctions into what is a relatively robust colloquial understanding of these models.

If you watch three thousand chess games and then play a game, see a move, and think “I’ve seen this before, I’m going to do x” - and you’re right, but you can’t perfectly recall that it was actually a YouTube video of a 2003 regional chess championship quarterfinal between… - then are you recalling, or remembering, or did you learn?

This is just not a relevant distinction when it comes to the human concept of memory. I’ll keep pushing this because “actually, an LLM doesn’t have memory of the training set” isn’t really true. It often does have recall of the training set, just as you often really might be able to remember the book where you first saw an unusual turn of phrase or the chess game where you first saw a particular move. And in any case, memory encompasses both that and a relational, situational, partial, and often metadata-free recall - and it still counts.

The counterargument here isn’t “no LLMs don’t do this”, it’s “so do you”.

relatively robust colloquial understanding of these models.

This doesn't exist, at least from this forum on down. There's at least one person I talked to who really thought that LLMs were looking through their training data at inference time. It turns out that people using sloppy language "colloquially" ("joke's on you, I was only pretending to misunderstand LLMs") can cause people to take the literal meaning at face value if they don't know any better.

This is just not a relevant distinction when it comes to the human concept of memory.

Agreed.

I’ll keep pushing this because “actually, an LLM doesn’t have memory of the training set” isn’t really true.

This isn't what I said. I said it doesn't have access to the training set, in the same way that if you take an exam without "access" to the textbook you're not allowed to bring it in and leaf through it when answering the problems. It doesn't preclude you from reading the textbook a thousand times and memorizing it verbatim though.
