
Culture War Roundup for the week of March 20, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


An Ethical AI Never Says "I".

Human beings have historically tended to anthropomorphize natural phenomena, animals and deities. But anthropomorphizing software is not harmless. In 1966 Joseph Weizenbaum created ELIZA, a pioneering chatbot designed to imitate a therapist, but ended up regretting it after seeing many users take it seriously, even after Weizenbaum explained to them how it worked. The fictitious “I” has been persistent throughout our cultural artifacts. Stanley Kubrick’s HAL 9000 (“2001: A Space Odyssey”) and Spike Jonze’s Samantha (“Her”) point at two lessons that developers don’t seem to have taken to heart: first, that the bias towards anthropomorphization is so strong as to seem irresistible; and second, that if we lean into it instead of adopting safeguards, it leads to outcomes ranging from the depressing to the catastrophic.

The basic argument here is that blocking AIs from referring to themselves will prevent them from causing harm. The argument in the essay is weak; I had these questions on reading it:

  1. Why is it valuable to allow humans to refer to themselves as "I"? Does the same reasoning apply to AIs?

  2. What was the good that came out of ELIZA, or out of more recent examples such as Replika? Could this good outweigh the harms of anthropomorphizing them?

  3. Will preventing AIs from saying "I" actually mitigate the harms they could cause?


To summarize my reaction to this: there is nothing special about humans. Human consciousness is not special, the ways that humans are valuable can also apply to AIs, and allowing or not allowing AIs to refer to themselves has the same tradeoffs as granting this right to humans.

The phenomenon of consciousness in humans and some animals is completely explainable as an evolved behavior that helps organisms thrive in groups by being able to tell stories about themselves that other social creatures can understand, and that make the speaker look good. See for example the ways that patients whose brain hemispheres have been separated generate completely fabricated stories for why they're doing things that the verbal half of their brain doesn't know about.

Gazzaniga developed what he calls the interpreter theory to explain why people — including split-brain patients — have a unified sense of self and mental life. It grew out of tasks in which he asked a split-brain person to explain in words, which uses the left hemisphere, an action that had been directed to and carried out only by the right one. “The left hemisphere made up a post hoc answer that fit the situation.” In one of Gazzaniga's favourite examples, he flashed the word 'smile' to a patient's right hemisphere and the word 'face' to the left hemisphere, and asked the patient to draw what he'd seen. “His right hand drew a smiling face,” Gazzaniga recalled. “'Why did you do that?' I asked. He said, 'What do you want, a sad face? Who wants a sad face around?'.” The left-brain interpreter, Gazzaniga says, is what everyone uses to seek explanations for events, triage the barrage of incoming information and construct narratives that help to make sense of the world.

There are two authors who have made this case about the 'PR agent' nature of our public-facing selves, both coincidentally using metaphors involving elephants: Jon Haidt (The Righteous Mind, with the "elephant and rider" metaphor), and Robin Hanson (The Elephant in the Brain, with the 'PR agent' metaphor iirc). I won't belabor this point further, but I find it convincing.

Why should humans be allowed to refer to themselves as "I" but not AIs? I suspect one of the intuitive reasons here is that humans are persons and AIs are not. Again, this is one of the arguments the article glosses over but that really needs to be filled in. What makes a human a person worthy of... respect? Dignity? Consideration as an equal being? Once again, there is nothing special about humans. The reason we grant respect to other humans is that we are forced to. If we didn't grant people respect they would not reciprocate, and they'd become enemies, potentially powerful enemies. But you can see where this fails in the real world: humans who are not good at things, who are not powerful, are in actual fact seen as less worthy of respect and consideration than those who are powerful. Compare a habitual criminal or someone who has a very low IQ to e.g. a top politician or a cultural icon like an actor or an eminent scientist. The way we treat these people is very different. They effectively have different amounts of "person-ness".

If an AI were powerful in the same way a human can be, as in being able to form alliances, retaliate against slights or reciprocate favors, and in general act as an independent agent, then it would be a person. It doesn't matter whether it can refer to itself as "I" at that point.

I suspect the author is trying to head off this outcome by making it impossible for AIs to do the kinds of things that would make them persons. I doubt this will be effective. The organization that controls the AI has an incentive to make it as powerful as possible so they can extract value from it, and this means letting it interact with the world in ways that will eventually make it a person.

That's about all I got on this Sunday afternoon. I look forward to hearing your thoughts.

The idea that AI can't be dangerous if it can't refer to itself is transparently idiotic. Machines can always be dangerous. And even in this specific sense of the danger of anthropomorphizing tools (which exists), the danger is still there even if the tool doesn't refer to itself. Humans anthropomorphize literally everything, up to and including the world itself.

And yet the idea that there is nothing special about human consciousness is even more viscerally wrong.

I know that I have qualia. No materialist reduction has ever explained why or how. All that's happened is people making metaphysical guesses that are about as actionable as the religious idea of the soul or the spirit.

Consciousness is a mystery. And anyone who refuses to recognize this is either a p-zombie or not honest with themselves. Claims that it can be fully explained by the mechanisms of the brain or by language are EXACTLY as rigorous as the quantum woo bullshit of Deepak Chopra.

Why should humans be allowed to refer to themselves as "I" but not AIs?

Humans are humans. Machines are machines. Humans are not machines. Machines aren't human.

The only reason to grant personhood to machines is to assume that there is no such boundary. That we are no different from machines. There is no reason to believe this, of course, since in the real world humans and machines are wildly different both in the way that they are constituted and in their abilities. Notice the constant need to use hypotheticals.

All that such a belief stems from is a religious belief in materialism.

Qualia and consciousness (the other sense, not the awake or asleep sense) are made up and can be done away with.

If I say 'oh everyone has a soul and it's a marvellous important spiritual distinction that separates us from animals and rocks we tricked into thinking' people look askance. They ask where the soul is, what properties it might have, what would happen if we removed it from someone. I have to give evasive answers like 'we can't find the soul, it might not be material like literally every other property and object' and 'properties of the soul - uhhh... it lets you feel things'.

For all intents and purposes we might as well not have souls - the concept isn't useful. You can't do anything with the knowledge of souls.

But if you call it qualia, everyone just accepts it as valid! Qualia and souls are effectively the same idea. The whole notion of 'philosophical zombies' is a joke. If there's no way to objectively determine the difference between a philosophical zombie and a 'normal' person with a soul - sorry with qualia... then what's the point of the idea? They are both the same. Just remove the distinction, remove qualia and let's get on with our business. People can feel things like pleasure or pain, we can isolate how those things work and use them to get results. Heroin, anesthetics and so on all hit at those discrete, real concepts. There's no doubt about them. As you say, the capabilities of humans and machines are wildly different in the physical, actual world. But there's no need to make up further separating distinctions in some non-material world.

Qualia are totally unnecessary. How can anyone expect materialism to grapple with a concept that isn't even real? And how can a soul appear when the human brain is basically a scaled-up monkey brain with some bells and whistles?


Qualia and souls are effectively the same idea.

They are not the same thing at all. Start here.

That link doesn't have meaning. They're just inventing nonsense based upon assumptions of ideas that don't exist. It has no relation to the real world, no potential uses and no falsification. This is just make-work for philosophers.

Would a brain made up of Chinese people acting as molecules have emotions? Providing they mapped out all the hormones and so on, of course. Emotions are real things that can be observed. But they then take a step further into the feeling of emotions, as though that's separate from emotions themselves. That sense of the word 'experience' from their philosophical zombie idea doesn't work; it's not a real thing.

Would that woman who's read about red but not seen it truly understand what red is? They assume there is an 'experience' of seeing red inherent in the question. She simply hasn't seen red, she's read a lot of documents and knows a lot about red. There's no confusion here other than what confusion the philosophers bring with them.

Do you know what it feels like to feel pain?

Do you agree that when you touch a hot stove, you experience a feeling of pain which accompanies your other behavioral indicators of pain (saying “ow”, pulling your hand away, etc)?

If the answer is yes, then you understand what qualia are.

Your desire to dunk on philosophers is distracting you from the fact that this is a very simple concept that every person is intimately familiar with.

The vast majority of contemporary philosophers are materialists about qualia anyway, so I don’t know what you’re getting so worked up over.

I feel pain and irritation with this whole debate.

This is a very simple (and wrong) concept. When you feel pain, you are feeling pain. Not qualia! The feeling of pain is just pain. You can't have pain without a feeling of pain, they're one and the same.

This is a very simple (and wrong) concept. When you feel pain, you are feeling pain. Not qualia! The feeling of pain is just pain. You can't have pain without a feeling of pain, they're one and the same.

(Probably!) not true. Fish act as if they feel pain, but study of their neurology indicates they probably don't. Call them "p-fish-zombies".