Culture War Roundup for the week of April 27, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


If we can't test it, it may as well not exist. Having feelings, alone and distinct from all outcomes and outputs, is not a test.

This feels like the same kind of overly simplistic reductionism that the behaviorists engaged in.

I think internal mental states are a sensible thing to talk about. There are chatbots we can be very sure have no internal mental states: the very simple ones (like ELIZA), but also ones that would take more space than the entire universe, like Ned Block's "Blockhead" thought experiment, in which the chatbot is a giant lookup table with a canned response for every possible conversation up to some arbitrary length.
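For concreteness, the Blockhead architecture amounts to something like the following minimal Python sketch. The names and the tiny table are my own illustration, not anything from Block's paper; the point is that the real table would need an entry for every possible conversation up to the length bound, which is why it couldn't fit in the observable universe:

```python
# A toy "Blockhead": the entire chatbot is one precomputed lookup table
# keyed on the full conversation so far. No state, no inference, nothing
# that could plausibly count as an internal mental state.

BLOCKHEAD_TABLE = {
    ("Hello",): "Hi there!",
    ("Hello", "Hi there!", "How are you?"): "I'm fine, thanks.",
    # ...the real thought experiment requires an entry for EVERY possible
    # conversation up to some fixed length, hence the absurd size.
}

def blockhead_reply(conversation: tuple[str, ...]) -> str:
    """Pure retrieval: one dict lookup, nothing else."""
    return BLOCKHEAD_TABLE.get(conversation, "...")
```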

But for entities between those two extremes, we have to learn more about how they're actually working in order to say whether they have internal mental states or not.

While it is far from definitive, I remember that the interpretability research on ChessGPT (an LLM trained only on chess games in chess notation) found a representation of the state of the chess board inside the model. It turns out that the best way to predict the next move in a game of chess is to "realize" that there is a chess board with pieces on it, and that only particular moves are legal for particular pieces. That is, you more or less have to reverse engineer chess to predict the next token in chess notation.
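As I understand it, the technique in that line of work is roughly a linear probe over frozen activations: collect the LLM's hidden states at each move and train a linear classifier to read off the board. A minimal sketch in PyTorch, with all dimensions and names assumed for illustration rather than taken from the actual research code:

```python
# Sketch of a linear probe for board state. Assumptions: the chess LLM is
# frozen, we already have its hidden activations per position, and labels
# give the piece type on each of the 64 squares.

import torch
import torch.nn as nn

D_MODEL = 512        # assumed hidden size of the chess LLM
N_SQUARES = 64       # one prediction per board square
N_PIECE_TYPES = 13   # empty + 6 white piece types + 6 black piece types

probe = nn.Linear(D_MODEL, N_SQUARES * N_PIECE_TYPES)
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(activations: torch.Tensor, board_labels: torch.Tensor) -> float:
    """activations: (batch, D_MODEL) hidden states from the frozen LLM.
    board_labels: (batch, N_SQUARES) integer piece type per square."""
    logits = probe(activations).view(-1, N_SQUARES, N_PIECE_TYPES)
    # Cross-entropy over piece types, computed independently per square.
    loss = loss_fn(logits.permute(0, 2, 1), board_labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

The reason the probe is linear matters: a linear map is too weak to compute the board state from scratch, so if it can decode the board at high accuracy, that information was plausibly already represented in the activations.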

I wonder what the implications of that are for LLMs that do a reasonable job of replicating the emotional arc of a conversation with a person. I don't actually think it is totally implausible that the best way to predict what a human will say next is to essentially reverse engineer human cognition. Maybe when an LLM plays the part of a helpful assistant, it is doing something closely analogous to what a helpful human assistant's brain would be doing under the same circumstances.