
Culture War Roundup for the week of April 27, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


That is not actually true. It could merely be well trained or even overfit on a statistical distribution of chess moves such that it can proffer a valid move. You could do this with an SVM or a DQN. Nobody is saying either is conscious.

The larger point of my comment is that you actually cannot prove what is going on in an LLM's internal weights. You can theorize that it has an internal model, but proving that it does is currently impossible.

That is not actually true. It could merely be well trained or even overfit on a statistical distribution of chess moves such that it can proffer a valid move.

Well-trained or overfitting on a statistical distribution of chess moves is a model of chess, though. A model that's wrong (like most models), and one that's likely not very useful (like some models), but that doesn't make it not a model.
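To make that concrete, here is a minimal sketch of what "a statistical distribution of moves as a model" could look like: a bare frequency table over move sequences, with no board representation and no knowledge of the rules. The opening data is made up for illustration.

```python
from collections import Counter, defaultdict

# Toy "training corpus" of opening sequences (hypothetical data).
games = [
    ["e4", "e5", "Nf3", "Nc6"],
    ["e4", "e5", "Nf3", "Nf6"],
    ["e4", "c5", "Nf3", "d6"],
    ["d4", "d5", "c4", "e6"],
]

# Count which move follows each move: a pure statistical distribution,
# with no representation of the board or the rules of chess.
follows = defaultdict(Counter)
for game in games:
    for prev, nxt in zip(game, game[1:]):
        follows[prev][nxt] += 1

def predict(prev_move):
    """Return the continuation seen most often after prev_move."""
    return follows[prev_move].most_common(1)[0][0]

print(predict("e4"))  # "e5" in this toy corpus
```

Even this crude table predicts plausible continuations better than chance, which is the sense in which a fitted distribution is still a model, just a shallow and frequently wrong one.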

Refer to the 2nd part of my comment about providing evidence of the internal workings of the LLM to prove it has an internal model.

But both of those models are trained on chess games explicitly, an LLM to my knowledge is not.

Refer to the 2nd part of my comment about providing evidence of the internal workings of the LLM to prove it has an internal model.

Again, the proof is in the external behavior. To be able to predict something external at a rate better than chance requires some model somewhere. We know that these LLMs don't have an external model. Therefore it must have an internal one. Much like how, say, a 5-year-old who can throw a ball towards home plate certainly has some internal model of physics, as proven by the fact that he can, at a rate better than chance, throw the ball towards home plate instead of at 3rd base or straight up or just dropping it on the mound. We don't need to plant electrodes in his brain or do some fMRI studies to know this; the proof of the pudding is in the eating.

Yeah, that's not proof, that's a theory. There are other theories about what the LLM is doing, and they are just as explanatory as yours. You have run no experiments to isolate those alternatives and test whether or not they exist. You have run no ablation studies and no studies to attempt to isolate co-occurring variables. It is by definition a theory. Hence why I asked for proof, because I am certain you have none.

We can prove the child has an internal model of physics because we have 7 billion humans, including our own selves, and we can extrapolate our internal abilities across a generalized set of all humans. We are able to perform other activities with as much success as a baseball catch, which strongly hints at an internal physics model. There is overwhelming evidence.

Yeah, that's not proof, that's a theory. There are other theories about what the LLM is doing, and they are just as explanatory as yours. You have run no experiments to isolate those alternatives and test whether or not they exist. You have run no ablation studies and no studies to attempt to isolate co-occurring variables. It is by definition a theory. Hence why I asked for proof, because I am certain you have none.

I'm not sure where the disconnect is here. This is a simple logical question, not an empirical one. There's no need to check for some physical representation of a model, because making accurate predictions requires an implicit model, QED. I've yet to see anyone suggest an alternative - certainly not in this thread - and I'm not sure what alternative explanation could exist, logically.

We can prove the child has an internal model of physics because we have 7 billion humans, including our own selves, and we can extrapolate our internal abilities across a generalized set of all humans.

But we don't need 7 billion humans, or even any human other than that child, to conclude this. If we landed on an alien planet and observed an alien doing this, we would also know that it had an internal model of physics. If someone made a robot that could do this, we would know it as well.

An example of what you are proposing as evidence: we have an indestructible radio; you can’t open it. It does radio things. You are proposing that, empirically, since a voice comes out of this radio, it must have a tiny man inside of it. There is no other “evidence”. And the proof? Well, it’s empirically observable; what do you mean there is no tiny man inside the box?

It’s a bad argument and bad science.

EDIT: What are you actually using as a definition of internal model? It is imprecise in casual conversation but very specific in technical ones.

An example of what you are proposing as evidence: we have an indestructible radio; you can’t open it. It does radio things. You are proposing that, empirically, since a voice comes out of this radio, it must have a tiny man inside of it. There is no other “evidence”. And the proof? Well, it’s empirically observable; what do you mean there is no tiny man inside the box?

This has no relationship to what I wrote, as far as I can tell. Could you explain the connection?

I would say, logically, because a voice comes out of this radio, then it must have some ability to vibrate air. And depending on the nature of the voice and the words, I could draw some conclusions - e.g. if it reported on news that happened after the radio entered my presence, that it must have some ability to take in information from far away. I'm not sure where you're getting this idea that my logic would conclude that there's a tiny man. Again, I can't figure out how that has any relationship at all with what I wrote, and I'm curious what your explanation of that relationship is.

EDIT:

EDIT: What are you actually using as a definition of internal model? It is imprecise in casual conversation but very specific in technical ones.

Something that is simpler than the actual thing being modeled, but which can be used to help make predictions about the actual thing.

Could you explain the connection?

You see a black box, you observe inputs and outputs empirically and you derive an explanation for what is inside the box without actually being able to check.

Something that is simpler than the actual thing being modeled, but which can be used to help make predictions about the actual thing.

I’m understanding your definition as:

  1. compress structure from the environment/domain
  2. support prediction about that domain.

Does that track?

If so, we have wildly different definitions. I would say your definition is very broad: something like logistic regression or a Kalman filter would have an internal model.
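As a concrete sketch of why the broad definition sweeps in a Kalman filter: the filter's "internal model" is explicit in its predict step, which runs assumed transition dynamics forward before correcting against a measurement. A minimal 1-D constant-velocity version (all noise parameters are assumed values for illustration):

```python
# Minimal 1-D constant-velocity Kalman filter in plain Python.
# The internal model is explicit: the predict step assumes
# position += velocity * dt (the transition dynamics of the system).
dt, q, r = 1.0, 1e-4, 0.25       # timestep, process noise, measurement noise (assumed)
p, v = 0.0, 0.0                  # state estimate: position, velocity
P00, P01, P11 = 1.0, 0.0, 1.0    # estimate covariance (symmetric 2x2)

def step(z):
    """One predict/update cycle for a scalar position measurement z."""
    global p, v, P00, P01, P11
    # Predict: run the internal dynamics model forward one step.
    p = p + v * dt
    a00 = P00 + 2 * dt * P01 + dt * dt * P11 + q
    a01 = P01 + dt * P11
    a11 = P11 + q
    # Update: correct the prediction with the observed position.
    y = z - p
    S = a00 + r
    k0, k1 = a00 / S, a01 / S
    p += k0 * y
    v += k1 * y
    P00 = (1 - k0) * a00
    P01 = (1 - k0) * a01
    P11 = a11 - k1 * a01

# Feed noise-free observations of an object moving at 1 unit per step.
for z in range(10):
    step(float(z))
print(round(v, 2))  # velocity estimate has converged close to 1.0
```

It never observes velocity directly, yet recovers it, precisely because the transition matrix encodes how the hidden state evolves. Under the broad "compress + predict" definition, that clearly counts as an internal model.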

My definition is very RL/latent-space/world-model-esque: an internal model is a learned or encoded internal structure that represents the state and transition dynamics of an external system sufficiently well to support counterfactuals, simulation, planning, or action prediction.

Which is why your ball-throwing example confuses me: under my definition, yes, kids clearly have an internal model for catching a baseball, but it is very controversial/not settled that an LLM has an internal model for chess. I think saying kids catch a baseball using essentially a compressed predictive-statistics process is cognitively incorrect.
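A toy sketch of what the narrower, world-model-style definition demands: learned state and transition dynamics that can run counterfactual rollouts the agent never executed. The corridor environment below is invented purely for illustration.

```python
from collections import Counter, defaultdict

# Toy environment: a 1-D corridor with states 0..4 and actions -1/+1.
# The true dynamics below are "hidden" from the agent, which only sees
# the (state, action, next_state) transitions it experiences.
def env_step(state, action):
    return max(0, min(4, state + action))

# Learned internal model: (state, action) -> counts of observed next states.
T = defaultdict(Counter)
for s in range(5):               # gather experience in every state
    for a in (-1, 1):
        T[(s, a)][env_step(s, a)] += 1

def model_predict(state, action):
    """Most likely next state under the learned model (no env call)."""
    return T[(state, action)].most_common(1)[0][0]

# Counterfactual rollout: starting in state 0, what happens if we
# take +1 three times? Answered by simulation inside the model alone.
sim = 0
for _ in range(3):
    sim = model_predict(sim, 1)
print(sim)  # → 3: the learned transition dynamics support simulation/planning
```

The point of contention in the thread, under this definition, is whether an LLM trained on move text contains anything like the `T` table above for board states, or only surface statistics over move strings.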
