Culture War Roundup for the week of March 3, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Jesus Christ, some people won't see the Singularity coming until they're being turned into a paperclip.

Nuh uh, this machine lacks human internal monologues and evidence of qualia, you insist, as it harvests the iron atoms from your blood.

I find this argument strange, because being able to kill me is not evidence of a machine being conscious or intelligent. I could go and get myself killed by a lethal autonomous weapon today, and if you asked me as I bled out and died, my body torn in two by an autonomously-fired rocket, I would still insist that the machine that killed me is not a person and does not possess internal experience. And I would be correct.

Whatever qualia are or are not, whether you think they're important or not, the question of qualia cannot be resolved or made irrelevant by a machine killing people. I should have thought that's obvious.

And to scale that up a bit, I find it entirely imaginable that someone or other might invent autonomous, self-directed, self-replicating machines with no conscious experience, but which nonetheless outcompete and destroy all conscious beings. I can imagine a nightmare universe which contains no agents to experience anything, only artificial pseudo-agents that have long since destroyed all conscious agents.

There's already a novel with that premise, right? It doesn't use robots specifically, but isn't that the premise of Blindsight - that perhaps consciousness is evolutionarily maladaptive, and the universe will be inherited by beings without internal experience?

Thus I'm going to give the chad "yes". Maybe one day I get killed by a robot, and maybe that robot is not conscious and has no self-awareness. That it killed me proves nothing.

Thus I'm going to give the chad "yes". Maybe one day I get killed by a robot, and maybe that robot is not conscious and has no self-awareness. That it killed me proves nothing.

I think you're burying the lede here: it matters how a machine kills you.

If you get run over by a Waymo robotaxi because of a sensor glitch, you're dead, but it's an entirely different story if some misaligned AGI seizes control of all robotaxis and systematically starts murdering humans. The former is machine error and/or stupidity. The latter is intelligence.

I don't really care whether they have qualia or consciousness, and I make no positive claims in that regard. My argument is that those factors, or even 'reasoning' in a human manner, matter not a jot when it comes to the prospect of creating entities far more intelligent than us, entities which could kill us using that intelligence. Such a machine could well lack a grasp of the ineffable redness of red, but it can still act in ways that increase rgb(255,0,0) as measured by its visual sensors when it shoots you.
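To make that rgb(255,0,0) point concrete, here's a minimal toy sketch (every name in it is hypothetical, invented purely for illustration): a system can greedily optimize a measurable proxy for 'redness' without anything resembling an experience of red.

```python
# Toy illustration: a system that increases measured redness with no inner
# experience of red. All names are hypothetical; this is nobody's real API.

def mean_red(image):
    """Average red-channel value over an image given as rows of (r, g, b)."""
    pixels = [px for row in image for px in row]
    return sum(px[0] for px in pixels) / len(pixels)

def brighten_red(image, step=10):
    """A candidate 'action': nudge every pixel's red channel upward."""
    return [[(min(255, r + step), g, b) for (r, g, b) in row] for row in image]

def act(image, actions):
    """Greedy proxy optimization: take whichever action raises measured red most."""
    return max((action(image) for action in actions), key=mean_red)

image = [[(120, 30, 30), (60, 200, 10)],
         [(0, 0, 255), (255, 0, 0)]]
image = act(image, [brighten_red, lambda img: img])
print(mean_red(image))  # the proxy went up; no qualia were required
```

The score on the sensor goes up whether or not there is anything it is like to be the machine, which is the whole point.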

Something like dumb grey goo, or a synthetic biological equivalent, is almost certainly going to be the product of intelligence.

You're perfectly free not to care about consciousness per se, but you were mocking a hypothetical person for insisting that a machine isn't conscious even as it kills them. It's not clear to me why that hypothetical objector isn't correct. Consciousness clearly matters to that person, and the machine's lethality in no way indicates that it's conscious.

I am fully prepared to grant that a non-conscious machine that does not possess internal experience could be extremely complex and capable of destroying humanity.

I'm not sure how likely I think such a machine is, and I am skeptical that any such machine is likely to be produced in the next century, but it is philosophically conceivable. I just, to turn your argument back at you, don't really care about the hypothetical non-conscious supercomputer. It does not tell me anything about the thing I care about, which is personhood. So what if a supercomputer could conceivably destroy humanity? I've been able to imagine that for my entire life. Skynet is not a novel proposition. So... I just don't find the hypothetical genocide machine to be philosophically interesting.

To the extent that there's a controversy here, I suppose it's the practical question. "Would Skynet be conscious?" is a question you don't find interesting. "Would Skynet be capable of killing everyone?" is a question I don't find interesting. But "are we going to build Skynet?" is perhaps more relevant. As I said, I'm skeptical, and presumably you're more... I feel like 'optimistic' is the wrong word here, but you would estimate a greater likelihood of that happening, I suppose?

I find this argument strange, because being able to kill me is not evidence of a machine being conscious or intelligent.

Thus I'm going to give the chad "yes". Maybe one day I get killed by a robot, and maybe that robot is not conscious and has no self-awareness. That it killed me proves nothing.

It seems to me that you pretty much agree with the commenter you're responding to, that it simply doesn't matter if the AI has consciousness, qualia, or self-awareness. Intelligence, though, is something else. And whether or not the AI has something akin to internal, subjective experience, if it's intelligent enough, then it's both impressive and potentially dangerous. And that's the part that matters.

Well, I'm not sure that's how I would define 'intelligence'. I don't think that the laptop I am currently writing this message on is intelligent, even though it is capable of calculations that I am not.

But I certainly grant that a non-conscious machine could be very dangerous. That just doesn't strike me as the interesting part of AI hypotheticals.

I'd say that if your laptop is running Doom, with its artificial intelligence routines controlling the way an imp moves around on your screen, then it's displaying intelligence. Likewise if it's running a local LLM to produce text based on text inputs. I see intelligence as the ability to solve complex problems, such as making brown pixels appear as a demon attempting to murder you, or producing text that reads as a conversation partner responding to a prompt. Where exactly the line for "complex" is drawn is admittedly pretty vague and rightly controversial, though.
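To illustrate the "low" sense of intelligence being gestured at here, a sketch of the kind of logic that classically moves a game monster around (purely illustrative; this is not id Software's actual imp code, and all the names are made up):

```python
# A toy finite-state machine in the spirit of classic game "AI".

def imp_ai(state, distance_to_player, can_see_player):
    """Return (next_state, action) from simple sensory inputs."""
    if state == "idle":
        return ("chase", "growl") if can_see_player else ("idle", "wander")
    if state == "chase":
        if distance_to_player < 2:
            return ("attack", "claw")
        if not can_see_player:
            return ("idle", "stop")
        return ("chase", "move_toward_player")
    if state == "attack":
        if distance_to_player >= 2:
            return ("chase", "move_toward_player")
        return ("attack", "throw_fireball")
    raise ValueError(f"unknown state: {state!r}")

# The monster "solves" its little problem: find the player, close in, attack.
state = "idle"
for dist, seen in [(10, False), (8, True), (1, True), (3, True)]:
    state, action = imp_ai(state, dist, seen)
    print(state, action)
```

Nothing in there is mysterious, which is rather the point: it solves a (very simple) problem, and whether that deserves the word "intelligence" is exactly what's in dispute.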

I think many parts of AI hypotheticals are interesting, including whether or not they're conscious or have agency and whether or not they're dangerous. But in terms of real-world applications of modern and upcoming AI, I think the danger/use part is far more interesting than the consciousness part. I expect that, for the near future, it's highly unlikely that we'll get AI that we can say with any meaningful level of confidence is "conscious" or has subjective experience or whatever, but it's almost certain that we'll get AI that's useful/dangerous (arguably, this has already happened). I think there's a high chance we'll get AI that appears to have agency in the near future as well.

I suppose I grant that there's a relatively 'low' use of the word intelligence that applies to things like the Doom AI. When I talk about the AI in a video game, I'm calling it 'intelligence', and in a sense I mean it.

But I don't see that as intelligence in a 'high' sense, if that makes sense? When I play a video game, the monsters are in some sense 'intelligent', but they are not intelligent in the sense that, for example, would be associated with possessing rights.

Maybe we should distinguish our terminology somewhat - perhaps the Doom demons are intelligent but not sapient? And I previously used the word 'intelligence' synonymously with sapience, rather than with this lower level of simulated agency?

In any case, I think I broadly agree with your conclusion. I think it is extraordinarily unlikely that in the near future we will produce artificial beings that can reasonably be said to be conscious, sapient, or an equivalent. I don't think we're going to produce any artificial people. But we are going to produce, and right now are in fact producing, machines that are capable of complex behaviour, and these machines offer both potential and risk.

Maybe we should distinguish our terminology somewhat - perhaps the Doom demons are intelligent but not sapient? And I previously used the word 'intelligence' synonymously with sapience, rather than with this lower level of simulated agency?

I'd agree with this characterization. I'd also say that, personally, I don't associate intelligence with deserving rights. That opens a door that I'd rather keep quite shut. I don't think these terms are completely clear-cut, but it seems fair enough to say that sapience, sentience, consciousness, and the ability to suffer are the things I'd associate with deserving rights.

You're correct in this assertion. A malevolent AGI or ASI does not need to be conscious or have qualia to pose a threat. It doesn't even have to think like humans, as long as it thinks.

If someone can look at a machine that is better than the average human at all of {math, coding, medicine, astronomy...} and then claim that it's not intelligent, what can you really say at that point?

Half the metrics for generality of intelligence would rule out humans as being general intelligences!

If someone can look at a machine that is better than the average human at all of {math, coding, medicine, astronomy...} and then claim that it's not intelligent, what can you really say at that point?

"I need better metrics."

But when the McNamara discipline is applied too literally, the first step is to measure whatever can be easily measured.
The second step is to disregard that which can't easily be measured or given a quantitative value.
The third step is to presume that what can't be measured easily really isn't important.
The fo[u]rth step is to say that what can't be easily measured really doesn't exist. This is suicide.
-- Daniel Yankelovich