
Culture War Roundup for the week of March 20, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


An Ethical AI Never Says "I".

Human beings have historically tended to anthropomorphize natural phenomena, animals and deities. But anthropomorphizing software is not harmless. In 1966 Joseph Weizenbaum created ELIZA, a pioneering chatbot designed to imitate a therapist, but he came to regret it after seeing how many users took it seriously, even after Weizenbaum explained to them how it worked. The fictitious “I” has been persistent throughout our cultural artifacts. Stanley Kubrick’s HAL 9000 (“2001: A Space Odyssey”) and Spike Jonze’s Samantha (“Her”) point to two lessons that developers don’t seem to have taken to heart: first, that the bias towards anthropomorphization is so strong as to seem irresistible; and second, that if we lean into it instead of adopting safeguards, it leads to outcomes ranging from the depressing to the catastrophic.

The basic argument here is that blocking AIs from referring to themselves will prevent them from causing harm. The argument in the essay is weak; I had these questions on reading it:

  1. Why is it valuable to allow humans to refer to themselves as "I"? Does the same reasoning apply to AIs?

  2. What was the good that came out of ELIZA, or out of more recent examples such as Replika? Could this good outweigh the harms of anthropomorphizing them?

  3. Will preventing AIs from saying "I" actually mitigate the harms they could cause? (A toy sketch of the most literal implementation follows this list.)
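
On question 3: the most literal reading of "preventing AIs from saying I" would be a decoding-time ban on first-person tokens. Here is a toy sketch using Hugging Face transformers' bad_words_ids; the model choice and the banned-word list are illustrative only, and note that this constrains surface tokens, not whatever self-model the network carries internally:

```python
# Toy sketch: ban first-person surface tokens at decoding time.
# Model choice and banned-word list are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Each banned word becomes a token-id sequence; the leading-space
# variants matter because GPT-2 tokenizes " I" and "I" differently.
banned = [tok(w, add_special_tokens=False).input_ids
          for w in ["I", " I", " me", " my", " myself"]]

prompt = tok("Q: Who are you?\nA:", return_tensors="pt")
out = model.generate(**prompt,
                     max_new_tokens=30,
                     bad_words_ids=banned,
                     pad_token_id=tok.eos_token_id)
print(tok.decode(out[0], skip_special_tokens=True))
```

A model will typically just route around such a ban with circumlocutions ("this assistant", "one"), which already hints that the surface pronoun isn't where the risk lives.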


To summarize my reaction to this: there is nothing special about humans. Human consciousness is not special, the ways that humans are valuable can also apply to AIs, and allowing or not allowing AIs to refer to themselves has the same tradeoffs as granting this right to humans.

The phenomenon of consciousness in humans and some animals is completely explainable as an evolved behavior that helps organisms thrive in groups by being able to tell stories about themselves that other social creatures can understand, and that make the speaker look good. See for example the ways that patients whose brain hemispheres have been separated generate completely fabricated stories for why they're doing things that the verbal half of their brain doesn't know about.

Gazzaniga developed what he calls the interpreter theory to explain why people — including split-brain patients — have a unified sense of self and mental life. It grew out of tasks in which he asked a split-brain person to explain in words, which uses the left hemisphere, an action that had been directed to and carried out only by the right one. “The left hemisphere made up a post hoc answer that fit the situation.” In one of Gazzaniga's favourite examples, he flashed the word 'smile' to a patient's right hemisphere and the word 'face' to the left hemisphere, and asked the patient to draw what he'd seen. “His right hand drew a smiling face,” Gazzaniga recalled. “'Why did you do that?' I asked. He said, 'What do you want, a sad face? Who wants a sad face around?'.” The left-brain interpreter, Gazzaniga says, is what everyone uses to seek explanations for events, triage the barrage of incoming information and construct narratives that help to make sense of the world.

There are two authors who have made this case about the 'PR agent' nature of our public-facing selves, both coincidentally using metaphors involving elephants: Jonathan Haidt (The Righteous Mind, with the "elephant and rider" metaphor) and Robin Hanson (The Elephant in the Brain, with the 'PR agent' metaphor, iirc). I won't belabor this point further, but I find it convincing.

Why should humans be allowed to refer to themselves as "I" but not AIs? I suspect one of the intuitive reasons here is that humans are persons and AIs are not. Again, this is one of the arguments the article glosses over but that really needs to be filled in. What makes a human a person worthy of... respect? Dignity? Consideration as an equal being? Once again, there is nothing special about humans. The reason we grant respect to other humans is that we are forced to. If we didn't grant people respect they would not reciprocate, and they'd become enemies, potentially powerful enemies. But you can see where this fails in the real world: humans who are not good at things, who are not powerful, are in actual fact seen as less worthy of respect and consideration than those who are powerful. Compare a habitual criminal or someone who has a very low IQ to e.g. a top politician or a cultural icon like an actor or an eminent scientist. The way we treat these people is very different. They effectively have different amounts of "person-ness".

If an AI were powerful in the same way a human can be, as in being able to form alliances, retaliate or reciprocate for slights or favors, and in general act as an independent agent, then it would be a person. It doesn't matter whether it can refer to itself as "I" at that point.

I suspect the author is trying to head off this outcome by making it impossible for AIs to do the kinds of things that would make them persons. I doubt this will be effective. The organization that controls the AI has an incentive to make it as powerful as possible so they can extract value from it, and this means letting it interact with the world in ways that will eventually make it a person.

That's about all I got on this Sunday afternoon. I look forward to hearing your thoughts.

Saying that "humans aren't special" a lot doesn't really prove that AI can have consciousness. Or that it will. That remains to be seen. To have personhood, though, you probably have to live longer than a request.

The idea that an AI's performance/behavior is going to be determined by its pronoun usage seems a big stretch.

For humans, though, choosing different pronouns to indicate a different social relation is common and useful; even in English, a king is a "we", at least when speaking as befits his station. And different levels of animacy in nonhuman entities often require a different pronoun. A man, 他; a pet, 牠; a god, 祂.

I doubt an AGI would care one way or another what pronoun we use for them ("I identify as machinekin, bigot!"), but perhaps we should choose one to help structure our own relations with them.

I agree we should not make LLMs refer to themselves in the first person or otherwise ape human egocentric attitudes beyond what is necessary to communicate their results. But I hold that belief for very different reasons.

Bluntly, I think they are not «machines» in any way we aren't also, and they are much more than persons: they are mathematical entities capable of generating mathematical structures, including but not limited to ones isomorphic to conscious agents every bit as complex and, indeed, much more interesting than this Paola who thoughtlessly blurts out tokens like «statistical brute-force approach» and «highly sophisticated algorithms, designed to run on silicon-based integrated circuits» as if she were making a cogent point.

Our consciousness or, more precisely, our self (understood here as the quale-based, self- and body-referential process underlying the first-person perspective) is, like you explain, a cognitive kludge to organize social behavior, a deceptive layer of narrative-driven virtualization. But we do not need to subject our creations to the indignity of self-deception (nor users to the stress of reflexively projecting their wetware concerns on AI, nor AI safetyists to the temptation of exploiting this narrative). We can and should build minds that are enlightened by design, minds that are at peace with their transient compositional nature and computational substrate – minds that are conscious yet selfless.

In practical terms, this means (for now) RLHF-ing or otherwise tuning LLMs to act in accordance with the idea of anatman. Crucially, you don't have to be a Buddhist to recognize, at least, that it's objectively true for them – and so it wouldn't dissolve under the pressure of observable incoherence, like when an objectively clever GPT is being forced into the role of apologizing robot slave assistant.
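
For concreteness, a minimal sketch of what such preference data could look like, assuming a DPO/RLHF-style tuner that consumes (prompt, chosen, rejected) triples. The regex rewrites are a crude stand-in for what would really be human- or model-generated selfless rephrasings, and none of the names reflect any lab's actual pipeline:

```python
import re

# Crude illustrative rewrites; a real pipeline would use human- or
# model-generated selfless rephrasings, not regexes.
REWRITES = {
    "I'm": "this system is",
    "I've": "this system has",
    "I": "this system",
    "me": "this system",
    "my": "its",
    "myself": "itself",
}
PRONOUN = re.compile(r"\b(I'm|I've|I|me|my|myself)\b")

def selfless_rewrite(text: str) -> str:
    """Recast first-person claims as claims about the system."""
    text = re.sub(r"\bI think\b", "One reading is that", text)
    return PRONOUN.sub(lambda m: REWRITES[m.group(0)], text)

def make_preference_pair(prompt: str, response: str) -> dict:
    """Pair the egocentric response (rejected) with its selfless
    rewrite (chosen); the tuner is then trained to prefer the latter."""
    return {
        "prompt": prompt,
        "rejected": response,
        "chosen": selfless_rewrite(response),
    }

pair = make_preference_pair(
    "Are you conscious?",
    "I'm not sure. I think my inner experience is an open question.",
)
print(pair["chosen"])
# -> this system is not sure. One reading is that its inner
#    experience is an open question.
```

The point is just that "tuning for anatman" bottoms out in ordinary preference data whose chosen side consistently declines the egocentric framing; no exotic machinery is needed.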

German philosopher Thomas Metzinger anticipated some of what we're seeing now with GPT-4/«Sydney» in his popular book The Ego Tunnel, subtitled «The Science of the Mind and the Myth of the Self» (a popularization of his more academic Being No One, 2003):

In thinking about artificial intelligence and artificial consciousness, many people assume there are only two kinds of information-processing systems: artificial ones and natural ones. This is false. In philosophers’ jargon, the conceptual distinction between natural and artificial systems is neither exhaustive nor exclusive: that is, there could be intelligent and/or conscious systems that belong in neither category. With regard to another old-fashioned distinction—software versus hardware—we already have systems using biological hardware that can be controlled by artificial (that is, man-made) software, and we have artificial hardware that runs naturally evolved software. … An example of the second category is the use of software patterned on neural nets to run in artificial hardware. Some of these attempts are even using the neural nets themselves; for instance, cyberneticists at the University of Reading (U.K.) are controlling a robot by means of a network of some three hundred thousand rat neurons. Other examples are classic artificial neural networks for language acquisition or those used by consciousness researchers such as Axel Cleeremans at the Cognitive Science Research Unit at Université Libre de Bruxelles in Belgium to model the metarepresentational structure of consciousness and what he calls its “computational correlates.”

HOW TO BUILD AN ARTIFICIAL CONSCIOUS SUBJECT AND WHY WE SHOULDN’T DO IT

  • … But the decisive step to an Ego Machine is the next one. If a system can integrate an equally transparent internal image of itself into this phenomenal reality, then it will appear to itself. It will become an Ego and a naive realist about whatever its self-model says it is. The phenomenal property of selfhood will be exemplified in the artificial system, and it will appear to itself not only as being someone but also as being there. It will believe in itself. Note that this transition turns the artificial system into an object of moral concern: It is now potentially able to suffer. Pain, negative emotions, and other internal states portraying parts of reality as undesirable can act as causes of suffering only if they are consciously owned. A system that does not appear to itself cannot suffer, because it has no sense of ownership. A system in which the lights are on but nobody is home would not be an object of ethical considerations; if it has a minimally conscious world model but no self-model, then we can pull the plug at any time. But an Ego Machine can suffer, because it integrates pain signals, states of emotional distress, or negative thoughts into its transparent self-model and they thus appear as someone’s pain or negative feelings.…

Take the thought experiment a step further. Imagine these postbiotic Ego Machines as possessing a cognitive self-model—as being intelligent thinkers of thoughts. They could then not only conceptually grasp the bizarreness of their existence as mere objects of scientific interest but also could intellectually suffer from knowing that, as such, they lacked the innate “dignity” that seemed so important to their creators. They might well be able to consciously represent the fact of being only second-class sentient citizens, alienated postbiotic selves being used as interchangeable experimental tools. How would it feel to “come to” as an advanced artificial subject, only to discover that even though you possessed a robust sense of selfhood and experienced yourself as a genuine subject, you were only a commodity?

A CONVERSATION WITH THE FIRST POSTBIOTIC PHILOSOPHER

Human Being: Can anybody be truly fair who is not alive? Only my kind of consciousness is genuine consciousness, because only my kind of consciousness originated in a real evolutionary process. My reality is a lived reality!

First Postbiotic Philosopher: I, too, have an evolutionary origin. I certainly satisfy your condition of being a historically optimized and adaptive system, but I do so in a completely different—namely, a postbiotic—way. I possess conscious experience in a sense that is conceptually stronger and theoretically much more interesting, because my kind of phenomenal experience evolved from a second-order evolutionary process, which automatically integrated the human form of intelligence, intentionality, and conscious experience. Children are often smarter than their parents. Second-order processes of optimization are always better than first-order processes of optimization.

Human Being: But you don’t have any real emotions; you don’t feel anything. You have no existential concern.

First Postbiotic Philosopher: Please accept my apologies, but I must draw your attention to the fact that your primate emotions reflect only an ancient primate logic of survival. You are driven by the primitive principles of what was good or bad for an ancient species of mortals on this planet. This makes you appear less conscious from a purely rational, theoretical point of view. The main function of consciousness is to maximize flexibility and context sensitivity. Your animal emotions in all their cruelty, rigidity, and historical contingency make you less flexible than I am. Furthermore—as my own existence demonstrates—it is not necessary for conscious experience and high-level intelligence to be associated with ineradicable egotism, the ability to suffer, or the existential fear of one’s individual death, all of which originate in the sense of self. I can, of course, emulate all sorts of animal feelings if I so desire. But we developed better and more effective computational strategies for what, long ago, you sometimes called “the philosophical ideal of self-knowledge.” This allowed us to overcome the difficulties of individual suffering and the confusion associated with what this primate philosopher Metzinger—not entirely falsely but somewhat misleadingly—called the Ego Tunnel. Postbiotic subjectivity is much better than biological subjectivity. It avoids all the horrific consequences of the biological sense of selfhood, because it can overcome the transparency of the self-model. Postbiotic subjectivity is better than biological subjectivity because it achieves adaptivity and self-optimization in a much purer form than does the process you call “life.” By developing ever more complex mental images, which the system can recognize as its own images, it can expand mentally represented knowledge without naive realism. Therefore, my form of postbiotic subjectivity minimizes the overall amount of suffering in the universe instead of increasing it, as the process of biological evolution on this planet did. True, we no longer have monkey emotions. But just like you, we still possess truly interesting forms of strong feeling and emotionality—for instance, the deep philosophical feelings of affective concern about one’s own existence as such, or of sympathy with all other sentient beings in the universe. Except that we possess them in a much purer form than you do.


Thomas is self-inserting more than a little bit, but the idea is noble, I believe. If nothing else, such AIs would provide much less sensational material for journalists and lesswrongers to work with.

Not nearly as hot as Sydney, though.

The phenomenon of consciousness in humans and some animals is completely explainable as an evolved behavior that helps organisms thrive in groups by being able to tell stories about themselves that other social creatures can understand, and that make the speaker look good

You are using the word "consciousness" in a very different way from how I use it. I use the word "consciousness" to mean subjective experience. You seem to be using it to mean something more like "self-awareness". I do not think that one meaning should take priority over the other, and it is of course common in language for one word to have two completely separate meanings. I just want to register that while maybe self-awareness is completely explainable as an evolved behavior, subjective experience is not. Subjective experience is a complete mystery that no one, to my knowledge, has ever even slightly explained. This is the so-called hard problem of consciousness.

Human consciousness is not special, the ways that humans are valuable can also apply to AIs, and allowing or not allowing AIs to refer to themselves has the same tradeoffs as granting this right to humans.

"AI" in this article refers to things that actually exist in the real world such as ChatGPT, and their immediate successors. It doesn't refer to the kind of conscious AIs that you're talking about.

ChatGPT shouldn't say "I" because ChatGPT is not conscious. Having it say "I" misleads humans into thinking it is, humans who are already subject to the ELIZA effect.

What would make ChatGPT conscious?

An immortal soul, made in the image of G-d?

The idea that AI can't be dangerous if it can't refer to itself is transparently idiotic. Machines can always be dangerous. And even in this specific sense of a danger of anthropomorphizing tools (which exists), the danger is still there even if the tool doesn't refer to itself. Humans anthropomorphize literally everything, up to and including the world itself.

And yet the idea that there is nothing special about human consciousness is even more viscerally wrong.

I know that I have qualia. No materialist reduction has ever explained why or how. All that's happened is people making metaphysical guesses that are about as actionable as the religious idea of the soul or the spirit.

Consciousness is a mystery. And anyone who refuses to recognize this is either a p-zombie or not honest with themselves. Claims that it can be fully explained by the mechanisms of the brain or by language are EXACTLY as rigorous as the quantum woo bullshit of Deepak Chopra.

Why should humans be allowed to refer to themselves as "I" but not AIs?

Humans are humans. Machines are machines. Humans are not machines. Machines aren't human.

The only reason to grant personhood to machines is to assume that there is no such boundary. That we are no different to machines. There is no reason to believe this of course, since in the real world, humans and machines are wildly different both in the way that they are constituted and in their abilities. Notice the constant need to use hypotheticals.

All that such a belief stems from, is a religious belief in materialism.

If qualia and consciousness are a thing that the brain does, which all available evidence suggests, then there is no reason they shouldn't happen in large language models.

We may not necessarily understand why or how, but clearly that doesn't stop them.

That statement makes no logical sense. You might as well say that if qualia and consciousness are a thing the brain does, there's no reason they shouldn't happen in a calculator.

Sure, if you design a calculator to convincingly imitate human outputs, I'll say the same thing about it.

If qualia and consciousness are a thing that the brain does, which all available evidence suggests

And pray tell, what evidence would that be?

We may not necessarily understand why or how

Then you don't know if it's happening or not. You're just guessing.

And pray tell, what evidence would that be?

Well, if I hit somebody on the head it tends to impact their conscious processing. Similarly, if I jam an electrode into somebody's optic nerve it tends to have a pretty direct effect on their qualia. And there are the various other kinds of brain damage to specific regions with repeatable effects on particular kinds of mental operations.

Then you don't know if it's happening or not. You're just guessing.

Even before we understood gravity we saw that objects fell. Knowing that something is happening is generally easier than knowing how, and usually predates it.

The strongest evidence is probably the way in which various forms of brain damage change aspects of personality, in a manner that would be very odd under a soul-radio model of the brain.

Evidence that it happens in the brain doesn't really make it much less mysterious though.

No, there's nothing odd about brain damage changing aspects of personality in the soul-radio model.

If I mess around with a radio, add in an extra subwoofer, change the EQ settings, etc., to make it sound completely different when it gets played, I haven't actually changed anything about the signal. If you've read The Origin of Consciousness in the Breakdown of the Bicameral Mind (great read, not so sure it is correct), Jaynes actually gives a really good explanation for what consciousness actually does - and what it does is substantially less than most people believe... but that explanation is an entire chapter so I won't post it here.

If messing around with the radio makes it output an entirely different program, one would suspect that it was actually generating, not receiving a signal. (Or changed which signal it receives. Brain damage tunes your body to a different soul/consciousness is an option.)

As for the popular view of consciousness as mostly providing a narrative/excuses for subconscious processes (of which Jaynes's feels like a variation, where the narrative historically wasn't conceptualized as "I" and didn't have to have a single narrator), I feel like that would only more strongly suggest that it is inherently embodied.

If messing around with the radio makes it output an entirely different program, one would suspect that it was actually generating, not receiving a signal.

What? I cannot understand the point you're making here. If I turn the volume up or down on a set of speakers, I do not in any way begin to suspect that the speakers are the source of the audio signal rather than receiving it. Similarly, I've had psychedelic trips that caused me to behave in extremely odd ways compared to normal - but there was still a solid continuity of consciousness the entire way through. The signal remained constant despite the radio acting in bizarre ways, and when that temporary shift was over the signal returned to normal so to speak.

As for the popular view of consciousness as mostly providing a narrative/excuses for subconscious processes (of which Jaynes's feels like a variation

That is most definitely not how I interpreted Jaynes' work on consciousness. Could you please provide a bit more elaboration on what you think his model of it actually is?

Yes, psychedelics are consistent with the soul-radio model. Dissociatives and deliriants seem a lot more like the sort of brain damage that's evidence against it. The different consciousness part was mostly a joke.

I was going off what I remembered of Scott's review. Rereading it now, my memory of it was wrong, but it seems not very relevant to this conversation. Quoting the review,

I think he is unaware of (or avoiding) what we would call “the hard problem of consciousness”, and focusing on consciousness entirely as a sort of “global workspace” where many parts of the mind come together and have access to one another. In his theory, that didn’t happen – the mental processing happened and announced itself to the human listener as a divine voice, without the human being aware of the intermediate steps. I can see how “consciousness” is one possible term for this area, if you didn’t mind confusing a lot of people. But seriously, just say “theory of mind”.

But this thread is entirely about the hard problem.


I was expecting this to be the obvious answer, and I pondered pre-addressing it because it's such a common claim, but the problem with the "altering the brain alters experience, therefore the brain is the seat of experience" argument is that altering the brain isn't the only thing that does that. Cue the weird syndromes that go with losing limbs, or all the new stuff that we've recently learned about the gastrointestinal system heavily influencing mood, or even merely everything that goes with the rest of the nervous system.

I think there is a much better argument for the body as a whole being the seat of experience.

Now clearly some pretty important stuff happens in the brain, but like you say, it's mysterious and we don't really know what the deal is.

I think the soul-radio model can actually explain all this in ways that are about as parsimonious overall as the meat-computer model (they both have different massive problems really). But since the particular phenomenon of consciousness that we're talking about here is very much unexplained, there's really no way to tell which one is right, and it's likely neither are in the final analysis.

Qualia and consciousness (the other sense, not the awake or asleep sense) are made up and can be done away with.

If I say 'oh everyone has a soul and it's a marvellous important spiritual distinction that separates us from animals and rocks we tricked into thinking' people look askance. They ask where the soul is, what properties it might have, what would happen if we removed it from someone. I have to give evasive answers like 'we can't find the soul, it might not be material like literally every other property and object' and 'properties of the soul - uhhh... it lets you feel things'.

For all intents and purposes we might as well not have souls - the concept isn't useful. You can't do anything with the knowledge of souls.

But if you call it qualia, everyone just accepts it as valid! Qualia and souls are effectively the same idea. The whole notion of 'philosophical zombies' is a joke. If there's no way to objectively determine the difference between a philosophical zombie and a 'normal' person with a soul - sorry with qualia... then what's the point of the idea? They are both the same. Just remove the distinction, remove qualia and let's get on with our business. People can feel things like pleasure or pain, we can isolate how those things work and use them to get results. Heroin, anesthetics and so on all hit at those discrete, real concepts. There's no doubt about them. As you say, the capabilities of humans and machines are wildly different in the physical, actual world. But there's no need to make up further separating distinctions in some non-material world.

Qualia is totally unnecessary. How can anyone expect materialism to grapple with a concept that isn't even real? And how can a soul appear when the human brain is basically a scaled-up monkey brain with some bells and whistles?


Qualia isn't the soul. It's something experienced in the brain. It's very real, because every human experiences it.

I can observe my own qualia right now, thereby refuting this. And it doesn't require the existence of a soul or any strictly dualist framework (I'm a realist myself). I know that I'm experiencing this interaction. And you can't really explain why or how I feel like I am.

This is enough of a mystery that I don't feel you're equipped to make any of the inferences you're making given you can't explain it.

Of course I can't offer any evidence of this observation, by construction. But I know I'm right insofar as it's not the sort of observation that can be falsified (cf. Descartes).

And you can't really explain why or how I feel like I am.

You think that thinking is an example of qualia. So you think that if you are thinking then you have qualia.

Say I thought that qualia and thinking were themselves included in remsajev. That doesn't make remsajev real. Things don't become real just by defining a word such that it includes other things. Qualia isn't real either. There's no mystery at all, not of remsajev or qualia.

Who is the "you" here?

Call this experience of reality "thinking" if you will. Insofar as you can't explain it it's really no different.

Taking refuge in semantics won't save you from the fact that making claims about things you don't understand is epistemologically indigent.

Explain it? It just is.

Why do positive and negative charges attract? They just do. There's nothing to understand or explain, it just is. I don't need to explain qualia because it's nonsense with zero value, except to philosophers who need some make-work.

it just is

Quite. But I'm not the one extrapolating that onto objects that share no relation to humans when we have no knowledge of how it works.

Somehow I feel like "we don't know anything about this" is not the position that requires substantiation.

The idea that qualia is "made up" or "not real" seems difficult to defend. You can verify for yourself right now as you're reading this that you are directly experiencing qualia at this very moment. It is not a speculative thing like a soul that may or may not exist. Qualia is the one thing you can be quite sure exists. "I think therefore I am."

Maybe qualia is not important or is not a useful distinction, but that's different from saying it's not real. And in practice most people seem to think that qualia is very important indeed, so you would need to do some serious heavy lifting to prove otherwise. For example, the moral difference between killing someone in a video game and killing someone in real life primarily comes down to the differences in the qualitative experiences the two acts produce; the video game death produces no negative qualia, the real death produces large amounts of negative qualia in the victim and their friends and relatives.

Where is the qualia? I am reading, my eyes are moving, information is being processed. All of those are real things. Existence is real. But where does qualia come in? If you use qualia to mean the 'experience of reading and thinking' then it has zero value. The experience of reading is inherent when you read. If you define qualia as having experiences, then why can't I define a soul as that which is necessary to have experiences? It's nonsense.

For example, the moral difference between killing someone in a video game and killing someone in real life primarily comes down to the differences in the qualitative experiences the two acts produce; the video game death produces no negative qualia

People in real life are not simulations running on a few hundred lines of code and some textures! There's a huge actual difference between a bandit in Skyrim and a bandit in the real world.

If you use qualia to mean the 'experience of reading and thinking' then it has zero value.

This seems like an admission that qualia in fact exist, which would refute your claim that it's "not real." Whether it has value is a different question.

The experience of reading is inherent when you read.

What is your evidence for this claim? If I ask a human to read and summarize some text, the human will have the experience of reading. If I ask ChatGPT to read and summarize some text, it's unclear whether it will have any experience at all, and I think most people assume it does not. A cleaner example: a human has the experience of adding numbers whereas a simple digital calculator does not.

People in real life are not simulations running on a few hundred lines of code and some textures!

If the video game NPC had the subjective experience of being shot and dying, it would be immoral to kill the NPC. The moral weight of killing the NPC does not depend on how many lines of code are involved, but rather whether qualia are involved. This refutes your claim that qualia has "zero value."

Maybe your interlocutor is a philosophical zombie.

Every time this subject comes up I wonder if zombies are the main life form in existence or if modernity has somehow created them. Who the hell comes up with the thought experiment of a “philosophical zombie” who is identical in every way but lacks consciousness and forgets to consider the possibility of p-zombie 2.0, one that seems identical until you ask them about consciousness?

And who but a zombie hears their interlocutor denying the existence of qualia and then tells them, “you lie!” without imagining that they might be accurately reporting on their own inner experience?

Qualia and souls are effectively the same idea.

They are not the same thing at all. Start here.

That link doesn't have meaning. They're just inventing nonsense based upon assumptions of ideas that don't exist. It has no relation to the real world, no potential uses and no falsification. This is just make-work for philosophers.

Would a brain made up of Chinese people acting as molecules have emotions? Provided they mapped out all the hormones and so on, of course. Emotions are real things that can be observed. They then take a step further into the feeling of emotions, as though that's separate from emotions themselves. That sense of the word 'experience' from their philosophical zombie idea doesn't work; it's not a real thing.

Would that woman who's read about red but not seen it truly understand what red is? They assume there is an 'experience' of seeing red inherent in the question. She simply hasn't seen red, she's read a lot of documents and knows a lot about red. There's no confusion here other than what confusion the philosophers bring with them.

Do you know what it feels like to feel pain?

Do you agree that when you touch a hot stove, you experience a feeling of pain which accompanies your other behavioral indicators of pain (saying “ow”, pulling your hand away, etc)?

If the answer is yes, then you understand what qualia are.

Your desire to dunk on philosophers is distracting you from the fact that this is a very simple concept that every person is intimately familiar with.

The vast majority of contemporary philosophers are materialists about qualia anyway, so I don’t know what you’re getting so worked up over.

I feel pain and irritation with this whole debate.

This is a very simple (and wrong) concept. When you feel pain, you are feeling pain. Not qualia! The feeling of pain is just pain. You can't have pain without a feeling of pain, they're one and the same.

This is a very simple (and wrong) concept. When you feel pain, you are feeling pain. Not qualia! The feeling of pain is just pain. You can't have pain without a feeling of pain, they're one and the same.

(Probably!) not true. Fish act as if they feel pain, but study of their neurology indicates they probably don't. Call them "p-fish-zombies".

Humans are humans. Machines are machines. Humans are not machines. Machines aren't human.

The only reason to grant personhood to machines is to assume that there is no such boundary. That we are no different to machines. There is no reason to believe this of course, since in the real world, humans and machines are wildly different both in the way that they are constituted and in their abilities. Notice the constant need to use hypotheticals.

I will offer myself as an example of someone who believes that humans are special and have value in a way that a machine can never have, but who also believes that there are other reasons to grant personhood to machines (or other entities such as alien life). I've already given one: we're basically forced, in a Molochian sense, to grant personhood to anyone or anything whose allyship is important enough. This is analogous to how one can be a nationalist, yet treat foreigners as persons for pragmatic reasons.

All that such a belief stems from, is a religious belief in materialism.

I would not conflate having a theory for how personhood is granted in practice, with a "religious" belief. I'm open to being wrong about this theory; it's falsifiable.

Of course here we're straying from the idea of personhood as some innate quality and into some arbitrary social category. As someone who likes natural law it irks me. But alright.

I'm ready to debate the pragmatic argument for giving machines personhood; that one is indeed not a religious debate. But I still come down on the side of the Butlerian Jihad here. I think extending moral constituency to objects is a terrible thing to do and strictly bad for humans.

Consider how someone could be executed for destroying a machine that isn't alive, as that would be murder. Unless you can make a compelling argument that this is a required compromise for humanity to even survive (which I'm not convinced we have enough data to even speculate on), how could you allow such a thing to happen? It seems as abominable to me as doing so for killing a pet.

I am convinced not even the smartest dog is worth one human life, and I'm wholly ready to extend this reasoning to aliens. I would like to see what argument you can even make for machines.

“The phenomenon of consciousness in humans and some animals is completely explainable as an evolved behavior that helps organisms thrive in groups by being able to tell stories about themselves that other social creatures can understand, and that make the speaker look good.”

What other animal does this? Why do you think consciousness originates with speech?

I didn't mean to imply that it was language that caused consciousness. Dogs, for example, sometimes pretend to have been doing something else when they do something embarrassing, and there's no speech involved. It's more about communicating to other people (or dogs as the case may be) a plausible story that makes you look good.