
Culture War Roundup for the week of March 20, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


An Ethical AI Never Says "I".

Human beings have historically tended to anthropomorphize natural phenomena, animals and deities. But anthropomorphizing software is not harmless. In 1966 Joseph Weizenbaum created ELIZA, a pioneering chatbot designed to imitate a therapist, but he came to regret it after seeing many users take it seriously, even after he explained to them how it worked. The fictitious “I” has been persistent throughout our cultural artifacts. Stanley Kubrick's HAL 9000 (“2001: A Space Odyssey”) and Spike Jonze's Samantha (“Her”) point to two lessons that developers don't seem to have taken to heart: first, that the bias towards anthropomorphization is so strong as to seem irresistible; and second, that if we lean into it instead of adopting safeguards, it leads to outcomes ranging from the depressing to the catastrophic.

The basic argument here is that blocking AIs from referring to themselves as "I" will prevent them from causing harm. The argument in the essay is weak; these are the questions I had on reading it:

  1. Why is it valuable to allow humans to refer to themselves as "I"? Does the same reasoning apply to AIs?

  2. What was the good that came out of ELIZA, or out of more recent examples such as Replika? Could this good outweigh the harms of anthropomorphizing them? (On how little machinery the original ELIZA actually had, see the sketch after these questions.)

  3. Will preventing AIs from saying "I" actually mitigate the harms they could cause?
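
As an aside on question 2: part of what made the ELIZA effect so striking is how little machinery sat behind it. Here is a minimal sketch of ELIZA-style keyword matching and pronoun reflection in Python; the rules and canned responses are hypothetical illustrations of the technique, not Weizenbaum's original DOCTOR script.

```python
import re

# Toy substitution table: swap first- and second-person words so the
# reply points back at the user. Illustrative only.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# Toy keyword rules: each pattern captures a fragment of the user's
# utterance, which gets reflected into the canned response.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    # Replace each word with its reflection, leaving unknown words as-is.
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(utterance: str) -> str:
    text = utterance.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # Generic fallback when no rule matches

print(respond("I need a holiday"))  # Why do you need a holiday?
print(respond("I am unhappy"))      # How long have you been unhappy?
print(respond("It rained today"))   # Please go on.
```

Everything that feels like attentive listening comes from reflecting the user's own words back at them; there is no model of the conversation at all, let alone a self doing the "listening".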


To summarize my reaction to this: there is nothing special about humans. Human consciousness is not special; the ways in which humans are valuable can also apply to AIs; and allowing or not allowing AIs to refer to themselves has the same tradeoffs as granting this right to humans.

The phenomenon of consciousness in humans and some animals is entirely explainable as an evolved behavior that helps organisms thrive in groups by letting them tell stories about themselves that other social creatures can understand, stories that make the speaker look good. See, for example, the way that patients whose brain hemispheres have been surgically separated generate completely fabricated explanations for actions that the verbal half of their brain knows nothing about.

Gazzaniga developed what he calls the interpreter theory to explain why people — including split-brain patients — have a unified sense of self and mental life. It grew out of tasks in which he asked a split-brain person to explain in words, which uses the left hemisphere, an action that had been directed to and carried out only by the right one. “The left hemisphere made up a post hoc answer that fit the situation.” In one of Gazzaniga's favourite examples, he flashed the word 'smile' to a patient's right hemisphere and the word 'face' to the left hemisphere, and asked the patient to draw what he'd seen. “His right hand drew a smiling face,” Gazzaniga recalled. “'Why did you do that?' I asked. He said, 'What do you want, a sad face? Who wants a sad face around?'.” The left-brain interpreter, Gazzaniga says, is what everyone uses to seek explanations for events, triage the barrage of incoming information and construct narratives that help to make sense of the world.

There are two authors who have made this case about the 'PR agent' nature of our public-facing selves, both coincidentally using metaphors involving elephants: Jon Haidt (The Righteous Mind, with the "elephant and rider" metaphor) and Robin Hanson (The Elephant in the Brain, with the 'PR agent' metaphor, iirc). I won't belabor this point further, but I find it convincing.

Why should humans be allowed to refer to themselves as "I" but not AIs? I suspect one of the intuitive reasons here is that humans are persons and AIs are not. Again, this is one of the arguments the article glosses over but that really needs to be filled in. What makes a human a person worthy of... respect? Dignity? Consideration as an equal being? Once again, there is nothing special about humans. The reason we grant respect to other humans is that we are forced to: if we didn't grant people respect, they would not reciprocate, and they'd become enemies, potentially powerful enemies. But you can see where this fails in the real world: humans who are not good at things, who are not powerful, are in actual fact seen as less worthy of respect and consideration than those who are powerful. Compare a habitual criminal or someone with a very low IQ to, say, a top politician or a cultural icon like an actor or an eminent scientist: the way we treat these people is very different. They effectively have different amounts of "person-ness".

If an AI were powerful in the same way a human can be (able to form alliances, retaliate against slights or reciprocate favors, and in general act as an independent agent), then it would be a person. Whether it can refer to itself as "I" wouldn't matter at that point.

I suspect the author is trying to head off this outcome by making it impossible for AIs to do the kinds of things that would make them persons. I doubt this will be effective. The organization that controls the AI has an incentive to make it as powerful as possible so they can extract value from it, and this means letting it interact with the world in ways that will eventually make it a person.

That's about all I got on this Sunday afternoon. I look forward to hearing your thoughts.

Human consciousness is not special; the ways in which humans are valuable can also apply to AIs; and allowing or not allowing AIs to refer to themselves has the same tradeoffs as granting this right to humans.

"AI" in this article refers to things that actually exist in the real world such as ChatGPT, and their immediate successors. It doesn't refer to the kind of conscious AIs that you're talking about.

ChatGPT shouldn't say "I" because ChatGPT is not conscious. Having it say "I" misleads humans, who are already subject to the ELIZA effect, into thinking that it is.

What would make ChatGPT conscious?

An immortal soul, made in the image of G-d?