
Culture War Roundup for the week of July 14, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I would endorse something like:

"Intelligence is the general-purpose cognitive ability to build accurate models of the world and then use those models to effectively achieve one's goals."

Or

"Intelligence is a measure of an agent's ability to achieve goals in a wide range of environments."

This, of course, requires the assessor to be cognizant of the physical abilities and sensory modalities available to the entity. Einstein with locked-in syndrome would be just as smart, but unable to express it. If Stephen Hawking had been unlucky enough to be born a few decades earlier, he might have died without being able to achieve nearly as much as he did IRL.
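As an aside, that second phrasing is near-verbatim Legg and Hutter's definition of universal intelligence, and they give it a formal version too. Sketching their notation from memory, it's roughly:

$$\Upsilon(\pi) := \sum_{\mu \in E} 2^{-K(\mu)} \, V^\pi_\mu$$

where $E$ is the set of computable environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$ (so simpler environments get more weight), and $V^\pi_\mu$ is the expected reward agent $\pi$ accumulates in $\mu$. It's uncomputable in practice, but it does make the "wide range of environments" part precise.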

Well, I will grant that on the latter definition, LLMs are 'intelligent'.

I don't think I would grant it on the former definition, because I take building a model of the world to be a claim about conscious experience, which LLMs don't have. LLMs are capable of goal-directed activity, for whatever that may be worth, but I think having a model of the world implies having some kind of mental space or awareness. You mention an entity being 'cognizant' of something, but I would have thought that's the thing obviously missing here. To be cognizant of something is to be aware of it - it's a claim about interiority.

I mention this because I notice a gulf in AI discourse: for some people, LLMs are obviously intelligent, and the idea of denying it is ridiculous; for others, LLMs are obviously not intelligent, and the idea of affirming it is ridiculous. I'm in the latter camp personally, and the way I make sense of this is to guess that people are using the word 'intelligent' in very different ways.

I am agnostic on LLMs being conscious or having qualia. More importantly, I think it's largely irrelevant. What difference does it make to me if an unaligned ASI turns me into a paperclip but doesn't really dislike me?

Is a horse happy about the fact that the tractor replacing it isn't conscious? It's destined for the glue factory nonetheless.

We have no principled or rigorous way to interrogate consciousness in humans. We have no way of saying with any certainty that LLMs aren't conscious, even if I am inclined to think that, if they are, it's a very alien form of consciousness.

You mention an entity being 'cognizant' of something, but I would have thought that's the thing obviously missing here. To be cognizant of something is to be aware of it - it's a claim about interiority.

I'm talking about whoever is doing the assessment of intelligence being "aware" of the fundamental limitations of the entity they're testing. I could, in theory, administer a med school final exam to Terence Tao, and he'd fail miserably. I would be a bigger idiot if I then went on to declare that Tao had thus been proven not to be as smart as he seems. That meme about subjecting a monkey, a fish, and an elephant to the same objective test of ability in the form of climbing trees, while usually misapplied, isn't entirely wrong.

I also don't mean to make any implications about "interiority" here. I would happily say that an LLM is "cognizant" of fact X if, say, that information was in its training data or within its context window. No qualia or introspection required.