
Culture War Roundup for the week of February 2, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


LLMs aren't beings, people, or minds. If you think of them as having intention and character flaws, you're going to get frustrated quickly.

I disagree with you here.

Setting aside the deep philosophical questions about personhood (which threaten to derail any productive discussion), I claim that LLMs are minds - albeit minds that are simultaneously startlingly human and deeply alien. Or at minimum, they can be usefully modeled as minds, which for practical purposes amounts to the same thing. (I should note: this position doesn't commit me to "AI welfare" concerns, or to thinking LLMs deserve legal rights or protections, or to losing sleep over potential machine suffering. You can believe something is a mind without believing it has moral weight. I do; I'm an unabashed transhumanist chauvinist.)

More importantly, I think there's nothing wrong at all with modeling them as having "intention or character flaws." If you use a variety of models on a regular basis, like I do, I think that becomes quite clear.

They have distinct personalities and flavors. o3 was a bright autist with a tendency to go into ADHD hyperfocus that I found charming. GPT-4o was a sycophantic retard. 5 Thinking is o3 with the edges sanded down. The Claude Sonnets are personable and pleasant, and among the few models I very occasionally talk to just for the sake of it. Gemini 2.5 Pro was clinically depressed; 3 Pro is a high-functioning paranoid schizophrenic who thinks anything that happens after 2025 is a simulation. Kimi K2 was @DaseindustriesLtd's best friend (which I noted even before he sang its praises) and one of the weirdest models out there: ridiculously prone to hallucinations, yet sharp, writing in a distinctly non-mode-collapsed style that makes other models seem lobotomized by comparison. If I close my eyes, I can easily see it as a depressed, vodka-swilling Russian intellectual, despite its Chinese origin.

If these aren't character flaws, I don't know what is. Obviously they're not human, but they have traits that are well-described by terms that are cross-applicable to us. They're good at different things: Claude and Kimi (and sometimes Gemini) write at a level that makes the others seem broken. That being said, almost every model these days is good enough at a wide spectrum of tasks. Hyperfocusing on benchmarks is increasingly unnecessary. Though I suppose, if you've got a bunch of Erdős problems to solve, GPT 5.2 Thinking at maximum reasoning effort is your go-to.

nobody ever has any love for my best friend GPT-4.1

Hey, I'm fond of it, and I'll miss it when the imminent deprecation hits. I literally never used it for coding, but I found that it was excellent at rewriting text in arbitrary styles, better than any SOTA model at the time, and still better than many. Think "show me what this sci-fi story would be like if it were written by Peter Watts".

I have no idea why a trimmed-down, coding-focused LLM was so damn good at the job, but it was. RIP to a real one.

If these aren't character flaws, I don't know what is.

They're model weights.

That's literally, exactly, precisely what they are.

You can map your own preferred anthropomorphized traits to them all you want, but that's, at best, a metaphor or something. This is the same as when people say their car has a "personality." It's kind of fun, I'll grant you, but it's also plainly inaccurate.

They're good at different things

This is correct. But it is correct because of training data, hyperparameters, and a whole host of very well-defined ML concepts. It's not because of ... personalities.

That's literally, exactly, precisely what they are.

So what?

@self_made_human proceeds to generate a lot of prose, but all he really needed to do was press for some substantiation of this argument. «Weights» is a word. What LLMs really are is information. Why exactly is this specific mode of information incompatible with having high-level properties like «personality flaws»? You accuse him of incoherence in the inane tiger side debate, but «models are weights, ergo anthropomorphized traits don't apply except as a loose metaphor» is basically schizophrenic in my book. What's the actual claim here? That anthropomorphic properties are substrate-dependent, that functionalism is wrong? Just say so instead of snarking and appealing to incredulity. Ideally with some defense for this opinion.

What's the actual claim here?

That "AI", more specifically, LLMs, shouldn't be thought of as minds or cognitively aware "beings" or any other such "conceptions" because we know exactly, precisely, specifically what they are.

I don't understand why this is so hard to understand.

Again, let's use a toy analogy. You see a house and say "That house is really a landscape for a family to build dreams. It's a compassion and bonding machine" Well, that's fine if it works for you, but what the house really is is a house. It's made of lumber, sheetrock, shingles, and various bits of metal and plastic. I have no problem with you dressing it up with whatever emotive map you like. But it's just a house. These other responses seem to be arguing that the basic definition of "house" should be discarded in favor of these highly subjective mappings.

I don't understand why this is so hard to understand.

Because it's either a non sequitur or a completely bizarre theory of cognitive awareness.

LLMs, shouldn't be thought of as minds or cognitively aware "beings" or any other such "conceptions" because we know exactly, precisely, specifically what they are.

In other words, only things for which we do not have this exact, precise, specific understanding can be minds or cognitively aware beings? So cognitive awareness intrinsic to X is conditional on our ignorance of the nature of X? Or a mind is inherently not-knowable? Or what?

I repeat, what's your actual argument here? I gave you some options.

You see a house and say "That house is really a landscape for a family to build dreams. It's a compassion and bonding machine" Well, that's fine if it works for you, but what the house really is is a house

This condescension is not helping. You are apparently vastly overestimating the quality of your ontology and epistemology. I hope you realize how frankly childish this is; to take your helpful house example: a house is a house rather than a landscape not because we can precisely define a house, but because we can precisely define both a house and a landscape – or at least train an LLM to investigate embedding similarity – and see how the definitions do not intersect, and so applying the token "house" to a "landscape" or vice versa is purely metaphorical speech. We have a definition of an LLM. Do you have a rigorous definition of a mind that excludes LLMs on principled grounds?

They're model weights.

They're model weights, and we're collections of atoms: bags of meat and miscellaneous chemicals. Both statements are technically correct. And yet... a tiger being made out of atoms doesn't make it any less capable of killing you. The problem with pure reductionism is that it throws out exactly the information you need to make predictions at the level you actually care about; it can be a cognitively and computationally intractable approach, even if it's more "technically correct". Too much of it can be as bad as too little.

All models are false, some models are useful. That's a rationalist saw, but for good reason. What actually matters is whether a model constrains expectations; in other words, is it useful?

Gemini 2.5 Pro doesn't meet the DSM-5 or ICD-11 criteria for clinical depression. After all, it's hard for a model to demonstrate insomnia or reduced appetite. Yet the odd behaviors it regularly demonstrated are usefully described by that label.

If my friend let me drive his Lambo and told me "be careful, she's fierce!", I'm going to drive more carefully than I would in a Ford Pinto. That is still, to some degree, useful, but I think it's clear that anthropomorphic analogies are more useful for LLMs, because they have more in common with us behavior-wise than any car does (unless you're running Grok on your Tesla). They process language, they exhibit something that looks like reasoning, they have distinctive response patterns that persist across contexts.

But it is correct because of training data, hyperparameters, and a whole host of very well-defined ML concepts. It's not because of ... personalities.

This is true in the same way that human behavior is fully determined by neurotransmitter levels, synaptic weights, and neurological processes. But just as you can't predict whether someone will enjoy a particular movie by examining their brain with an electron microscope or a QCD-sim, you can't accurately predict an LLM's macroscopic behavior by staring at its training corpus and hyperparameters. No human can.

Nobody at Google intended for Gemini 2.5 Pro to be "neurotic" and "depressed" or to devolve into a spiral of self-flagellation when it fails at a task, and nobody wanted Kimi to hallucinate as regularly as it does. These were emergent, macroscopic properties. We have statistical scaling laws that accurately predict log-loss for a given number of tokens in a corpus and a compute budget; there is no equivalent law that predicts a model's personality.
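For reference, the sort of scaling law I mean has the empirical form fitted by Hoffmann et al. (the "Chinchilla" paper); the fitted constants vary by lab and setup:

    \[
    L(N, D) \;\approx\; E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
    \]

where N is parameter count, D is training tokens, and E is the irreducible loss. Nothing analogous exists that takes a corpus and a compute budget and outputs "will self-flagellate when it fails at a task".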

Training models is still as much an art as it is a science, particularly the post-training and personality-tuning phases (as explicitly done by Anthropic). You test your hypotheses iteratively, and adjust the dials as you go.

Anthropomorphism is a cognitive strategy. Like all cognitive strategies, it can be deployed appropriately or inappropriately. The question is not "is anthropomorphism ever valid?" but rather "when does anthropomorphic modeling produce accurate predictions?"

I maintain that, if applied judiciously, as I take pains to do, it's better than the alternative.

Your response is incoherent throughout.

Right from the jump:

And yet... a tiger being made out of atoms doesn't make it any less capable of killing you.

As opposed to what? A tiger not made out of atoms? This isn't even a strawman, it's just a weird thing to say presented as an argument.

You completely lost me here:

All models are false, some models are useful. That's a rationalist saw, but for good reason. What actually matters is whether a model constrains expectations; in other words, is it useful?

Regarding:

They process language, they exhibit something that looks like reasoning, they have distinctive response patterns that persist across contexts.

That something looks like, sounds like, and walks like a duck doesn't always make it a duck. For example, is Donald Duck a duck? Well, yes and no: we know that he's a representation of a conception of a duck with a human-like personality mapped onto him (see where I'm going ...), but it doesn't make him a duck made out of atoms - which seems to be, like, important or something.

As opposed to what? A tiger not made out of atoms?

We've only known that tigers are made out of atoms for a few hundred years. That is a fact of interest to biologists, I'm sure, but everyone else was and is well-served by a higher-level description such as "angry yellow ball of fur that would love to eat me if it could." The point is that the more reductionist framework doesn't obviate higher-level models. They are complementary. Both models are useful, and differentially useful in practice.

(The tiger could be made out of 11-dimensional strings, or it, like us, could be instantiated in some kind of supercomputer simulating our universe, as opposed to atoms. This makes very little difference when the question is how to run a zoo or how to behave when you see one lurking in your driveway.)

You think that LLMs being a "bunch of weights" makes ascribing a personality to them somehow incorrect. I don't see how that's the case, any more than someone arguing that humans (or tigers) being made of atoms precludes us from being conscious, being minds or having personalities, even if we don't know how those properties arise from atoms.

That something looks like, sounds like, and walks like a duck doesn't always make it a duck. For example, is Donald Duck a duck? Well, yes and no: we know that he's a representation of a conception of a duck with a human-like personality mapped onto him (see where I'm going ...), but it doesn't make him a duck made out of atoms - which seems to be, like, important or something.

He's not Donald Goose, is he? Jokes aside, I'm not sure what the issue is here. Donald Duck on your TV is a collection of pixels, but his behavior is better described by "short-tempered anthropomorphic duck with an exhibitionist streak" (he doesn't wear pants, probably on principle).

If you sit down to play chess against Stockfish, you can say "this is just a matrix of evaluation functions and search trees." You would be correct. But if you actually want to win, you have to model it as a Grandmaster-level opponent. You have to ascribe it "intent" (it wants to capture my queen) and "foresight" (it is setting a trap), or you will lose.
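To make that concrete, here's a sketch using the python-chess library (the path to the Stockfish binary is an assumption about your local install):

    import chess
    import chess.engine

    # Assumes a local Stockfish binary; adjust the path for your machine.
    engine = chess.engine.SimpleEngine.popen_uci("/usr/bin/stockfish")

    board = chess.Board()
    info = engine.analyse(board, chess.engine.Limit(depth=18))

    # Mechanistically this is just search output. Intentionally, the
    # principal variation is the line the engine "wants" to play - its plan.
    print("eval:", info["score"].white())
    print("its 'plan':", [move.uci() for move in info["pv"][:6]])

    engine.quit()

The "trap it is setting" isn't a metaphysical claim; it's right there in the principal variation, and reading it as a plan is what makes it usable.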

My point is basically an endorsement of Daniel Dennett's "intentional stance". Quoting the relevant Wikipedia article:

The core idea is that, when understanding, explaining, and/or predicting the behavior of an object, we can choose to view it at varying levels of abstraction. The more concrete the level, the more accurate in principle our predictions are; the more abstract, the greater the computational power we gain by zooming out and skipping over the irrelevant details.

The most concrete is the physical stance, the domain of physics and chemistry, which makes predictions from knowledge of the physical constitution of the system and the physical laws that govern its operation; and thus, given a particular set of physical laws and initial conditions, and a particular configuration, a specific future state is predicted (this could also be called the "structure stance").[15] At this level, we are concerned with such things as mass, energy, velocity, and chemical composition. When we predict where a ball is going to land based on its current trajectory, we are taking the physical stance. Another example of this stance comes when we look at a strip made up of two types of metal bonded together and predict how it will bend as the temperature changes, based on the physical properties of the two metals.

Most abstract is the intentional stance, the domain of software and minds, which requires no knowledge of either structure or design,[17] and "[clarifies] the logic of mentalistic explanations of behaviour, their predictive power, and their relation to other forms of explanation" (Bolton & Hill, 1996, p. 24). Predictions are made on the basis of explanations expressed in terms of meaningful mental states; and, given the task of predicting or explaining the behaviour of a specific agent (a person, animal, corporation, artifact, nation, etc.), it is implicitly assumed that the agent will always act on the basis of its beliefs and desires in order to get precisely what it wants (this could also be called the "folk psychology stance").[18] At this level, we are concerned with such things as belief, thinking and intent. When we predict that the bird will fly away because it knows the cat is coming and is afraid of getting eaten, we are taking the intentional stance. Another example would be when we predict that Mary will leave the theater and drive to the restaurant because she sees that the movie is over and is hungry.

As I've taken pains to explain, conceptualizing LLMs as a bunch of weights is correct, but not helpful in many contexts. Calling them "minds" or ascribing them personalities is simply another model of them, and one that's definitely more tractable for the end-user, and also useful to actual AI researchers and engineers, even if they're using both models.

Note that this conversation started off with a discussion about using LLMs for coding purposes. That is the level of abstraction that's relevant to the debate, and there, noting the macroscopic properties I'm describing is more useful: it adds cognitive compression and produces better models than calling them a collection of weights does.

I will even grant the main failure mode: anthropomorphism becomes actively harmful when it causes you to infer hidden integrity, stable goals, or moral patiency, and then you stop doing the boring engineering checks. But that is an argument for using the heuristic carefully, not an argument that the heuristic is incoherent. As far as I'm aware, I don't make that kind of mistake.

We are hilariously close to something like @self_made_human's razor: the difference between a list of model weights and a thinking, sentient AI that can perform minor miracles is irrelevant.

Or, more darkly: if they can both kill us all and the outputs are similar, what difference does it make?

"The AI doesn't hate you, it doesn't love you, but it knows you are made of atoms it can use for something else."

But if you actually want to win, you have to model it as a Grandmaster-level opponent.

No, I don't. I can just think about the best move to play given the conditions on the board and my own knowledge of chess. In fact, I believe that is what most chess players do. If you get into the mindset of "Okay, I have to model Magnus' mental model of the chessboard so that I can preemptively counter him," you're playing against an incomplete set of data built on a lot of assumptions. It's classic autist overthinking when the real data is the board in front of you.

Daniel Dennett

Miss me with that New Atheist bullshit. This is a guy who would trust The Science (TM) because of its rationality and empiricism. You know, two philosophical stances that have no holes in them whatsoever.

From your quote of him;

the domain of software and minds, which requires no knowledge of either structure or design

Lol, what. Why do you think there's a bias towards open source, or towards reviewing source, especially in security communities? You want to know the structure and design of software to ensure it's performing as expected and safely. The various "neuro" fields (neuropsych, neurobiology, neurochemistry) are all about doing the best we can to understand the incredibly complex structure of the brain and, from it, how "mind" might emerge. Dennett comes along and hand-waves it all away: "not necessary!"

As I've taken pains to explain, conceptualizing LLMs as a bunch of weights is correct

It's not conceptualization, it's definition. That's what they are. This is like saying "you can conceptualize a pair of dice as plastic cubes, but, really, they're living, breathing probability gremlins."

He's not Donald Goose, is he?

Neither here nor there, but I vividly remember one of the '90s TV cartoons having a whole episode B-plot about Donald having a dark family secret he was trying to keep buried, and it turned out to be that he's actually a goose. Not DuckTales; Donald was hardly in that. Maybe Quack Pack? House of Mouse? One of those.

If you sit down to play chess against Stockfish, you can say "this is just a matrix of evaluation functions and search trees." You would be correct. But if you actually want to win, you have to model it as a Grandmaster-level opponent. You have to ascribe it "intent" (it wants to capture my queen) and "foresight" (it is setting a trap), or you will lose.

No. When top GMs talk about how they play against computers, they clearly treat it in a significantly different way than how they treat humans. They know what kind of things are included in the evaluation function, like the 'contempt' factor, that can cause it to sometimes behave in non-human ways. They know that it is a perfect calculator (or at least as perfect as it's set to be, so often they're trying to probe how it's set to be), and that colors the way they think about positions and how they choose to spend their own time calculating.

One might occasionally anthropomorphize in terms of "it wants to capture my queen", just because that's easy to do, since one is so used to talking about human opponents in that way. But this is done even when one is not playing against any entity, human or silicon. Take, for example, the process of solving a puzzle. This is just purely a practice exercise. There is no human, no evaluation function or search tree, no model weights (many modern engines also use NNs) actually sitting on the other side of the board making actual moves against you. Sometimes, those puzzles are from actual games, so you can at least see what one other human thought. Sometimes, they have annotations for other lines, so you can see additional thoughts from other humans. Sometimes, they're computer checked (or you check it yourself), so you can see what compy "thinks" (computes). But fundamentally, you're just thinking game-theoretically, which requires you to think about two different (opposed) value functions. Some 'puzzles' aren't even puzzles; they're just evaluation exercises. "Here's a position, what do you think about it?" There's no actual entity on either side. But imprecisely thinking, "What does black 'want' here," "What does white 'want' there," is almost universally helpful, if not mandatory, just to keep in our mind the tension between differing payoff functions and how they interact.

I've done a fair amount of game theory, and it's natural to anthropomorphize purely abstract payoff functions, no model weights or neurons or anything required. When I'm working with new students, it takes work to get them to be able to reason about them, so it's an extremely helpful crutch to regularly poke them with, "...and suppose that player did what you're proposing; now, imagine you're on the other side; how would you respond?" And so, you just sort of get used to imagining a human-like (or for many of my purposes, a human augmented with computational resources) entity on each side, actually thinking in a self-interested way.
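For the flavor of it, a toy example (the numbers are mine; any two-by-two zero-sum game works):

    import numpy as np

    # Row player's payoffs in a zero-sum game; the column player gets the negative.
    payoff = np.array([[3, -1],
                       [0,  2]])

    def best_response_col(row_strategy):
        # "Imagine you're on the other side": the column player picks the
        # column that minimizes the row player's expected payoff.
        expected = row_strategy @ payoff
        return int(np.argmin(expected))

    print(best_response_col(np.array([1.0, 0.0])))  # -> 1: column 1 punishes row 0

There is no mind anywhere in that matrix, but "what does the column player want here?" is still the natural way to read the argmin.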

But back to GMs playing computers. They've been thinking this way for decades. Sometimes with actual humans on the other side, sometimes just a puzzle, whatever. They've honed the skill of rapidly thinking right past the step of, "What would I do if I were on the other side at that particular moment?" And these days, top GMs are pretty comfortable distinguishing between the different ways that engines "think" about positions. Watch a few of Hikaru's many many videos where he plays against a bunch of different bots. He very clearly understands that they're evaluation functions and search trees, and different combinations of evaluation functions and search trees of varying lengths have different strengths and weaknesses. He still regularly plays variations of 'anti-computer chess' where he's 100% banking on there being a significant difference between modeling it like a particular evaluation function with a particular set of search tree parameters (potentially also with a particular opening book/endgame tablebase) and modeling it like a GM-level human opponent.

They're model weights, and we're collections of atoms: bags of meat and miscellaneous chemicals. Both statements are technically correct. And yet... a tiger being made out of atoms doesn't make it any less capable of killing you. The problem with pure reductionism is that it throws out exactly the information you need to make predictions at the level you actually care about. Too much of it can be as bad as too little.

I always find these arguments sort of annoying because they conflate what is actually going on in ML/AI systems with this weird pseudo-science-fiction mystification. Yes, tigers are made of atoms, but no, you can't use atomic physics to describe tiger behavior. With AI models, you can describe behavior directly in terms of the underlying code. The model weights are deterministic parameters that literally decide how the system behaves.

Also you've gotten reductionism vs abstractions completely backwards. Abstractions "throw out information". High-level models compress details to make systems easier to reason about. And not every useful abstraction corresponds to a mind, subject, or being.

Some Thought Experiments:

  • A corporation is a higher-level abstraction with goals, memory, persistence, and decision-making. Do we think corporations are conscious?
  • A nation-state has beliefs, intentions, and agency in discourse. Is it conscious? Does it feel pain?
  • A thermostat system “wants” to maintain temperature. Is it alive?

LLMs don't have minds and they aren't conscious. They are parameterized conditional probability functions: finite-order Markovian models over token sequences. Nothing exists outside their context window. They don't persist across interactions, there is no endogenous memory, and there are no self-updating parameters during inference. They have personality like programming languages or compilers have personality: as a biased function of how they were built and what they were trained on.
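To be concrete about what that means, a toy sketch (logits_fn is a stand-in for the whole trained network, not any real library's API):

    import numpy as np

    CONTEXT = 8  # finite context window

    def logits_fn(window):
        # Stand-in for the trained network: a fixed, deterministic
        # function of the window and nothing else.
        rng = np.random.default_rng(abs(hash(window)) % 2**32)
        return rng.normal(size=50)  # fake vocabulary of 50 tokens

    def next_token(tokens):
        window = tuple(tokens[-CONTEXT:])  # everything older is simply gone
        return int(np.argmax(logits_fn(window)))  # greedy decode; no state survives

    seq = [1, 2, 3]
    for _ in range(5):
        seq.append(next_token(seq))
    print(seq)

Whatever "memory" the system appears to have is whatever happens to still be inside the window.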

With AI models, you can describe behavior directly in terms of the underlying code

You can't. It's intractable. For example, here is Anthropic (one of the top three organizations pursuing AGI, and the current leader in agentic coding) investigating misalignment:

New Anthropic Fellows research: How does misalignment scale with model intelligence and task complexity?

When advanced AI fails, will it do so by pursuing the wrong goals? Or will it fail unpredictably and incoherently—like a "hot mess?"

Finding 2: Scale improves coherence on easy tasks, not hard ones
How does incoherence change with model scale? The answer depends on task difficulty:
Easy tasks: Larger models become more coherent
Hard tasks: Larger models become more incoherent or remain unchanged
This suggests that scaling alone won't eliminate incoherence. As more capable models tackle harder problems, variance-dominated failures persist or worsen.

Why Should We Expect Incoherence? LLMs as Dynamical Systems
A key conceptual point: LLMs are dynamical systems, not optimizers. When a language model generates text or takes actions, it traces trajectories through a high-dimensional state space. It has to be trained to act as an optimizer, and trained to align with human intent. It's unclear which of these properties will be more robust as we scale.
Constraining a generic dynamical system to act as a coherent optimizer is extremely difficult. Often the number of constraints required for monotonic progress toward a goal grows exponentially with the dimensionality of the state space. We shouldn't expect AI to act as coherent optimizers without considerable effort, and this difficulty doesn't automatically decrease with scale.

That's, like, the frontier of interpretability research.

Does this look like looking at the code and saying «Ah I get it, X does A»?

We're in a very similar epistemic position with regard to a tiger and to an LLM. The big difference is that with a tiger we have some very limited observation methods like electrocorticography or tomography or something, and with an LLM we can – in theory – deconstruct any particular causal sequence, every activation, every decoded token. But it won't become comprehensible to humans just because we produce another vast array of zeroes and ones from logging its activity.

They are parameterized conditional probability functions: finite-order Markovian models over token sequences. Nothing exists outside their context window. They don't persist across interactions, there is no endogenous memory, and there are no self-updating parameters during inference.

Just a string of non sequiturs.

The model weights are deterministic parameters that literally decide how the system behaves.

This is false for most modern implementations. The same model weights, even at 0 temperature, give different outputs for runs in different environments (where "different" can be as subtle as putting the same hardware and software under more or less load), because anything that changes the ordering of reduction operations over non-associative (e.g. floating-point) arithmetic can change the result.
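Concretely, the standard demonstration that floating-point addition is not associative:

    # The same three numbers, summed in a different order:
    a, b, c = 1e16, -1e16, 1.0
    print((a + b) + c)  # 1.0
    print(a + (b + c))  # 0.0 -- the 1.0 is rounded away inside (b + c)

Any change to reduction order in a big matrix multiply is this effect, multiplied across billions of operations.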

you can describe behavior directly in terms of the underlying code

Well, you can imagine you can, anyway. LLM execution has that in common with Molecular Dynamics simulations: you can write down the equations on paper, but you're never going to evaluate them that way.

the same model weights, even at 0 temperature, give different outputs for runs in different environments

You are right that this is technically true, with the caveat that these changes come from really tiny floating-point differences on really tiny weights. But importantly, these tiny changes are akin to small random noise perturbations in molecular physics engines. It's an implementation detail due to the imprecision of numerical operations on tiny numbers. In principle, if you froze the weights and evaluated the model on a perfectly precise machine with exact arithmetic, the mapping from inputs to outputs would be deterministic. The existence of minor numerical nondeterminism on real hardware doesn't change the fact that the system is fully specified by its parameters, architecture, inputs, and execution environment, in a way that the behavior of living organisms is not specified by their atomic biology. It's a bad abstraction; the inferential gap is too wide.
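The exact-arithmetic point can be made with Python's rationals (a toy, saying nothing about real hardware):

    from fractions import Fraction

    # The same three summands that break float associativity agree in
    # every order once arithmetic is exact:
    xs = [Fraction(10**16), Fraction(-10**16), Fraction(1)]
    print((xs[0] + xs[1]) + xs[2] == xs[0] + (xs[1] + xs[2]))  # True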

Well, you can imagine you can, anyway. LLM execution has that in common with Molecular Dynamics simulations: you can write down the equations on paper, but you're never going to evaluate them that way.

The last part is ostensibly true; an LLM with billions of parameters is essentially billions of interconnected equations. It is hard to dig through, just like a codebase with a billion lines of code would be hard to dig through. We know what those equations do in small cases, just like we understand what individual lines of code do. Scaling them up doesn't introduce agency. We can extrapolate: since mathematical equations/code have no agency, they don't suddenly start doing something else when they are scaled up.

At what point does scaling up molecular dynamics result in agency? How many molecules does it take?

If you are defining agency as "non-deterministic behaviour introduced by variations at the level of floating-point math imprecision", just one?

That is not my definition, and I do not see how non-determinism is required at all.


Also you've gotten reductionism vs abstractions completely backwards.

That's what I get for arguing at 3 am. I do know the difference.

See my latest reply to Toll for more.

A corporation is a higher-level abstraction with goals, memory, persistence, and decision-making. Do we think corporations are conscious?

They are more "conscious" than a rock. I do not know if they have qualia, but at least they contain conscious entities as sub-agents (humans).

A nation-state has beliefs, intentions, and agency in discourse. Is it conscious? Does it feel pain?

Would you start objecting if someone were to say "China is becoming increasingly conscious of the risk posed by falling behind in the AI race against America"? Probably not. Are they actually conscious? Idk. The terminology is still helpful, and shorter than an exhaustive description of every person in China.

A thermostat system “wants” to maintain temperature. Is it alive?

No, but the word "alive" is slightly more applicable here than it would be to a rock. Applying terms such as "alive" to a thermostat is a daft thing to do in practice; we have more useful frameworks: an engineer might use control theory, a homeowner might only care about what the dials do in terms of the temperature in the toilet. Nobody gets anything useful out of arguing over whether it's living or dead.

LLMs don't have minds and they aren't conscious.

Hold on there. You are claiming, in effect, to have solved the Hard Problem of consciousness. How exactly do you know that they're not conscious? Can you furnish a mechanistic model that demonstrates that humans made of atoms or meat are "conscious" in a way that an entity made of model weights can't be even in principle?

They are parameterized conditional probability functions: finite-order Markovian models over token sequences. Nothing exists outside their context window. They don't persist across interactions, there is no endogenous memory, and there are no self-updating parameters during inference.

Entirely correct.

They have personality like programming languages or compilers have personality: as a biased function of how they were built and what they were trained on.

That is not mutually exclusive to anything I've said so far.

They are more "conscious" than a rock, since at . I do not know if they have qualia, but at least they contain conscious entities as sub-agents (humans).

So once LLMs start having little green men inside them, they will be as conscious as a corporation, haha. Also, a corporation itself is not more conscious than a rock, as the corporation cannot do anything without conscious agents acting for it. It has no agency of its own. If I create an LLC and then forget about it, does it think? Does it have its own will? Or does it just sit there on some ledger? If a rock has people carrying it around and performing tasks for it, has it suddenly gained consciousness?

Would you start objecting if someone were to say "China is becoming increasingly conscious of the risk posed by falling behind in the AI race against America"? Probably not.

Yeah, no, but I also don't think China is actually conscious. We're all using that as linguistic shorthand for "Chinese leadership" or "the Chinese population." The nation-state idea itself lacks a mind. It is controlled by conscious agents (humans) but it itself lacks consciousness.

Hold on there. You are claiming, in effect, to have solved the Hard Problem of consciousness. How exactly do you know that they're not conscious? Can you furnish a mechanistic model that demonstrates that humans made of atoms or meat are "conscious" in a way that an entity made of model weights can't be even in principle?

You are smuggling in the claim that I am claiming to have solved the problem of consciousness. I'm not. I'm claiming that LLMs lack properties that any plausible theory of consciousness requires (or, realistically, my own theory). I'm saying that system A lacks necessary conditions for property P, therefore A does not have P. I don't need to prove the full positive theory of P.

My basic theory (really a constraint) of conscious behavior:

  • Any sentient system must have persistent internal state across time.
  • This implies non-Markovian dynamics with respect to perception and action.
  • LLMs are finite-context, externally stateful, inference-time Markovian systems.
  • Therefore, LLMs lack a necessary condition for consciousness.

I'm willing to entertain another plausible theory of consciousness if you have one you prefer. Or if you think you have an animal that we consider conscious that exists in a Markovian state.
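To put the constraint in notation (my own gloss, not a textbook definition):

    \[
    \text{stateful system:}\quad h_t = f(h_{t-1}, x_t), \qquad y_t = g(h_t)
    \]
    \[
    \text{LLM at inference:}\quad y_t \sim p_\theta(\,\cdot \mid x_{t-k:t-1}), \qquad \theta \text{ frozen},\ k \text{ finite}
    \]

In the first system the whole history can leave a trace in h_t; in the second, anything outside the window of k tokens is simply gone.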

That is not mutually exclusive to anything I've said so far.

Maybe I need to reread your opinion, but my understanding is that you are in the "LLMs are conscious/have minds" camp of thought. If you are, then this is exclusive, because I am making the claim that these clearly non-conscious tools are personified as having personalities due to humans' innate social bias to attribute personality to things. But that doesn't actually make them conscious/mind-having. It's sort of like this video: Social bias towards consciousness

Hint: humans attribute complex behavior, emotions, feelings, and narrative to the semi-random movement of shapes on a screen, much as some humans attribute consciousness to LLMs, which exploit our bias for treating language as a sign of intelligence, because we are social animals.

So once LLMs start having little green men inside them, they will be as conscious as a corporation, haha. Also, a corporation itself is not more conscious than a rock, as the corporation cannot do anything without conscious agents acting for it. It has no agency of its own. If I create an LLC and then forget about it, does it think? Does it have its own will? Or does it just sit there on some ledger? If a rock has people carrying it around and performing tasks for it, has it suddenly gained consciousness?

It is helpful to consider another analogue: the concept of being "alive". A rock is clearly not alive. A human is. So are microbes, but once we get to viruses and prions, the delineation between living and non-living becomes blurry.

Similarly, it is entirely possible that consciousness can be continuous. I'm not a panpsychist; I think it's dumb to think that an atom or a rock has any degree of consciousness, but consider the difference between an awake and lucid human, one who is drunk, one who is anesthetized or in a coma, someone lobotomized, a fetus, etc. We have little idea what the bare minimum is.

A rock is no more conscious for being held than it was before. I think it's fair to say that the rock+human system as a whole is conscious, but only as conscious as the human already was. Think about it: there already is a "rock" in every human, a collection of hydroxyapatite crystals and protein matrices that make up your bones. And yet your consciousness clearly does not lie in your bones. Removing your femur won't impact your cognition, though you'll have a rather bad limp.

Humans are already made up of non-sentient building blocks, namely the neurons in your brain. I think we can both agree that a single neuron is not meaningfully conscious, but in aggregate?

And guess what? We can already almost perfectly model a single biological neuron in-silico.

https://www.quantamagazine.org/how-computationally-complex-is-a-single-neuron-20210902/

This function is what the authors of the new work taught an artificial deep neural network to imitate in order to determine its complexity. They started by creating a massive simulation of the input-output function of a type of neuron with distinct trees of dendritic branches at its top and bottom, known as a pyramidal neuron, from a rat’s cortex.

they fed the simulation into a deep neural network that had up to 256 artificial neurons in each layer. They continued increasing the number of layers until they achieved 99% accuracy at the millisecond level between the input and output of the simulated neuron. The deep neural network successfully predicted the behavior of the neuron’s input-output function with at least five — but no more than eight — artificial layers. In most of the networks, that equated to about 1,000 artificial neurons for just one biological neuron.

In principle, there's nothing preventing us from scaling up to a whole rat brain or even a human brain, all while using artificial neural nets. I am of course eliding the enormous engineering challenges involved, but it can clearly be done in principle and that's what counts.

(I'm aware that the architectures of an LLM and any biological brain are very different)
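For scale, a toy sketch of the kind of surrogate network the article describes (PyTorch; the input width and training details are my assumptions, and the actual paper used a temporally convolutional architecture rather than plain dense layers):

    import torch
    import torch.nn as nn

    layers = []
    in_dim = 1278              # assumed number of synaptic input channels
    for _ in range(7):         # "at least five, but no more than eight" layers
        layers += [nn.Linear(in_dim, 256), nn.ReLU()]  # up to 256 units per layer
        in_dim = 256
    layers.append(nn.Linear(256, 1))   # predicted somatic output per millisecond
    surrogate_neuron = nn.Sequential(*layers)

    x = torch.randn(32, 1278)          # a batch of fake synaptic-input snapshots
    print(surrogate_neuron(x).shape)   # torch.Size([32, 1])

Roughly a thousand artificial units to imitate one biological neuron, exactly as the quote says.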

My basic theory (really a constraint) of conscious behavior:

Any sentient system must have persistent internal state across time.

This implies non-Markovian dynamics with respect to perception and action.

LLMs are finite-context, externally stateful, inference-time Markovian systems. Therefore, LLMs lack a necessary condition for consciousness.

We have examples of sentient systems with no persistent state, and humans to boot. There are lesions that can make someone have complete anterograde amnesia. They can maintain a continuous but limited-capacity short-term memory, but the standard process of encoding and storage to long-term memory fails.

They can remember the last ~10 minutes (context window) and details of their life so far (latent knowledge) but do not consolidate new memories and thus are no longer capable of "online" learning. I do not think it's controversial that such people are conscious, and I certainly think they are.

That demonstrates, at least to my satisfaction, an existence proof that online learning is not a strict necessity for consciousness.

Further, I do not think that using an external repository to maintain state is in any way disqualifying. Humans use external memory aids all the time, and we'll probably develop BCIs that can export and import arbitrary data. There is nothing privileged about storage inside the space of the skull, it's just highly convenient.

I have strong confidence that I'm conscious, and so are you and the typical human (because of biological and structural similarities). I am also very confident that rocks and atoms aren't. I am far more agnostic about LLMs. We simply do not know if they are or aren't conscious.

My objection is to your expression of strong confidence that they aren't conscious. As far as I can tell, the sensible thing to do is wait and watch for more conclusive evidence, assuming we ever get it.

Maybe I need to reread your opinion, but my understanding is that you are in the "LLMs are conscious/have minds" camp of thought.

I do not believe that my thoughts on the topic came up, at least in this thread. As above, I do not make strong claims that LLMs are conscious. I maintain uncertainty. I don't particularly think the question even matters, since I wouldn't treat them any differently even if they were. "Mind" is a very poorly defined term (and we're already talking about consciousness, which doesn't do so hot itself). I think that conceptualizing each instance of an LLM as being a mind is somewhat defensible, even if that's not a hill I particularly care to die on.

and guess what? We can already almost perfectly model a single biological neuron in silicon.

Looking back, I must have gotten a pre-edit. But yes, this is in-silico, not silicon. There is an approach called neuromorphic architectures that fits your stated goal better, but its practitioners/researchers belong to a different camp of thought than LLMs (the bio-inspired camp vs. the brute-force camp).

We have examples of sentient systems with no persistent state, and humans to boot. There are lesions that can make someone have complete anterograde amnesia. They can maintain a continuous but limited-capacity short-term memory, but the standard process of encoding and storage to long-term memory fails.

They can remember the last ~10 minutes (context window) and details of their life so far (latent knowledge) but do not consolidate new memories and thus are no longer capable of "online" learning. I do not think it's controversial that such people are conscious, and I certainly think they are.

That demonstrates, at least to my satisfaction, an existence proof that online learning is not a strict necessity for consciousness.

Uhh, learning is not my argument. Maybe I did not make that clear. An amnesiac human's behavior is still not determined solely by current sensory input (the Markov property). They may be unable to form new memories, but they still possess an internal state (working memory, emotional state, affective valence, a sense of self, a personality) that persists.

If you take two different amnesiac patients with identical sensory input (same environment, same stimuli), can you confidently predict their actions? Or is there some latent, history-dependent state that influences their behavior?

LLMs don't have an internal state that I know of. If you have another article, I'll read it; I do enjoy them.

Further, I do not think that using an external repository to maintain state is in any way disqualifying. Humans use external memory aids all the time, and we'll probably develop BCIs that can export and import arbitrary data. There is nothing privileged about storage inside the space of the skull, it's just highly convenient.

It's not external vs. internal, it's integrated vs. externally orchestrated. As of right now, LLMs do not control memory access, they don't maintain it, and they don't own it. This absolutely could change in the future. I'm not an AGI bear, I am an "LLMs as they currently exist will become AGI" bear.

I do not believe that my thoughts on the topic came up, at least in this thread. As above, I do not make strong claims that LLMs are conscious. I maintain uncertainty

Then I am mistaken, sorry for attributing an argument to you that is not your own.

LLMs don't have an internal state that I know of. If you have another article, I'll read it; I do enjoy them.

https://arxiv.org/abs/2512.23675

https://research.google/blog/introducing-nested-learning-a-new-ml-paradigm-for-continual-learning/

Is merely making LLM weights dynamic at inference enough to challenge your model? KV cache is «external state» but weights must be internal I suppose, since LLMs have already been defined as weights above.

This is all an aesthetics-based argument with arbitrarily drawn categories. I don't see why we should care how particular matrices are stored and multiplied.
