
Culture War Roundup for the week of May 22, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


This is a bizarre problem I’ve noticed with ChatGPT. It will sometimes literally just make up links and quotations. I will ask it for authoritative quotations from so-and-so on a given topic, and many of the quotations turn out to be made up. Maybe it’s because I’m using the free version? But it shouldn’t be hard to force the AI to trawl only through academic works, peer-reviewed papers, etc.

It's not bizarre at all if you remember that ChatGPT has no inner qualia. It does not have any sort of sentience or real thought. It writes what it writes in an attempt to predict what you would like to read.

That is close enough to how people often think while communicating that it is very useful. But that does not mean that it somehow actually has some sort of higher order brain functions to tell it if it should lie or even if it is lying. All that it has are combinations of words that you like hearing and combinations of words that you don't, and it tries to figure them out based on the prompt.
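To make the point concrete, here is a deliberately toy sketch of "predict the next word with no notion of truth." This is not how GPT works internally (it uses a neural network, not a lookup table), and the bigram table is invented, but the failure mode is the same shape: citation-like text gets produced because it is a plausible continuation, not because anything was verified.

```python
import random

# Toy next-word table, invented for illustration. The "model" only knows
# which word tends to follow which; it has no concept of a real source.
bigrams = {
    "According": ["to"],
    "to": ["Smith", "Jones"],
    "Smith": ["(1998),", "(2003),"],
    "Jones": ["(2001),"],
    "(1998),": ["the"], "(2003),": ["the"], "(2001),": ["the"],
    "the": ["effect", "data"],
    "effect": ["is"], "data": ["show"],
}

def generate(start, n, seed=0):
    """Greedily sample up to n words by following the bigram table."""
    random.seed(seed)
    words = [start]
    while len(words) < n and words[-1] in bigrams:
        words.append(random.choice(bigrams[words[-1]]))
    return " ".join(words)

print(generate("According", 7))
# Emits a fluent, citation-shaped sentence; "Smith (1998)" (or similar)
# appears only because it is a likely continuation, never checked.
```

The fabricated citation isn’t a glitch in this setup; it is the system working exactly as designed.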

It's not bizarre at all if you remember that ChatGPT has no inner qualia. It does not have any sort of sentience or real thought. It writes what it writes in an attempt to predict what you would like to read.

I don't think I disagree here, but I don't have a good grasp of what would be necessary to demonstrate qualia. What is it? What is missing? It's something, but I can't quite define it.

If you asked me a decade ago I'd have called out the Turing Test. In hindsight, that isn't as binary as we might have hoped. In the words of a park ranger describing the development of bear-proof trash cans, "there is a substantial overlap between the smartest bears and the dumbest humans." It seems GPT has reached the point where, in some contexts, in limited durations, it can seem to pass the test.

I don't have a good grasp of what would be necessary to demonstrate qualia

One key point in the definition of qualia is that there need not be any external factors that correspond to whether or not an entity possesses qualia. Hence the idea of a philosophical zombie: an entity that lacks consciousness/qualia, but acts just like any ordinary human, and cannot be distinguished as a P-zombie by an external observer. As such, the presence of qualia in an entity by definition cannot be demonstrated.

This line of thinking, which originated in the parent post, seems misguided in a larger way. Whether or not you believe in the existence of qualia or consciousness, the important point is that there's no reason to believe that consciousness is necessarily tied to intelligence. A calculator might not have any internal sensation of color or sound, and yet it can perform division far faster than humans. Paraphrasing a half-remembered argument, this sort of "AI can't outperform humans at X because it's not conscious" talk is like saying "a forklift can't be stronger than a bodybuilder, because it isn't conscious!" First off, we can't demonstrate whether or not a forklift is conscious. And second, it doesn't matter. Solvitur levando.

One key point in the definition of qualia is that there need not be any external factors that correspond to whether or not an entity possesses qualia.

I disagree with this definition. If a phenomenon cannot be empirically observed, then it does not exist. If a universe where every human being is a philosophical zombie is indistinguishable from ours, then why not Occam's-razor away the whole concept of a philosophical zombie?

I consider it much more reasonable to define consciousness and qualia by function. This eliminates philosophical black holes like the hard problem of consciousness or philosophical zombies. I doubt the concept of a philosophical zombie can survive contact with human empathy either. Humans empathize with video game characters, with simple animals, or even a rock with a smiley face painted on it. I suspect people would overwhelmingly consider an AI conscious if it emulates a human even on the basic level of a dating sim character.

deleted

Only on a narrow definition of ‘exist,’ and only if you exclude the empirical observation of your own qualia, which you’re observing right now as you read this.

If I were GPT-7, then by your definition I would not have qualia. Of course, I am a human, and I have observed my qualia and decided that they do not exist on any higher level than my Minecraft house exists. Perhaps you could consider them abstract objects, but they are ultimately data interpreted by humans rather than physical objects that exist independently of human interpretation.

It’s your world, man, and you’re denying it exists. Cogito ergo sum.

Your computer has an inner world. You can peek into it by going into spectator mode in a game; even the windows on your screen are objects in your computer's inner world. Of course, I would not argue that a computer is conscious, but that is because I think consciousness is a property of neural networks, natural or artificial.

Artificial neural networks appear analogous to natural ones. For example, they can break down visual data into its details much as a human visual cortex does. A powerful ANN trained to behave like a human would also have its inner world. It would claim to be conscious the same way you do and describe its qualia and experience. And this artificial consciousness and these artificial qualia would exist at least on the level of data patterns. You might argue quasi-consciousness and quasi-qualia, but I would argue there is no difference.
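The "break down visual data into details" claim can be illustrated with the simplest possible feature detector: a tiny 2D convolution with a vertical-edge kernel, loosely analogous to edge-detecting cells in early visual cortex. The image and kernel below are invented for illustration; this is a sketch of the analogy, not a claim about any particular network's architecture.

```python
# A 4x5 image with a sharp vertical brightness edge between columns 2 and 3.
image = [
    [0, 0, 0, 9, 9],
    [0, 0, 0, 9, 9],
    [0, 0, 0, 9, 9],
    [0, 0, 0, 9, 9],
]

# Kernel that responds strongly where brightness jumps left-to-right.
kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

def convolve(img, k):
    """Valid-mode 3x3 convolution in plain Python."""
    out = []
    for r in range(len(img) - 2):
        row = []
        for c in range(len(img[0]) - 2):
            s = sum(img[r + i][c + j] * k[i][j]
                    for i in range(3) for j in range(3))
            row.append(s)
        out.append(row)
    return out

print(convolve(image, kernel))
# → [[0, 27, 27], [0, 27, 27]]
# High responses appear only where the edge sits: the unit "sees" an
# edge feature rather than raw pixels.
```

A real ANN learns many such kernels in its first layers rather than having them hand-written, but the decomposition into local features is the same idea.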

My thesis: simulated consciousness is consciousness, and simulated qualia is qualia.

More precisely, qualia are synaptic patterns and associations in an artificial or natural neural network. Consciousness is the abstract process and functionality of an active neural network that is similar to human cognition. Consciousness is much harder to define precisely, because people have not agreed whether animals are conscious, or even whether hyper-cerebral psychopaths are conscious (if they really even exist outside fiction).

I do start doubting when I read about behaviorists who don’t believe qualia exist or are important, though.

I think qualia does not exist per se. However, I do think qualia is important on the level that it does exist. We have entered such a low level of metaphysics that it is difficult to put the ideas into words.

Although I’m not certain, I extend the same recognition of some kind of qualia to most animals because they are like us, and from a similar origin and evince similar behavior

With AI, though, this goes out the window: computers are not the same sort of thing as you and me or as animals, and thus I have no reason to suspect it will have the same sort of consciousness as I do. It’s a fundamentally different beast, not even a beast, but a machine.

But why make the distinction? If you recognize animals as conscious, I think if you spent three days with an android equipped with an ANN that perfectly mimicked human consciousness and emotion, then your lizard brain would inevitably recognize it as a fellow conscious being. And once your lizard brain accepts that the android is conscious, then your rational mind would begin to reconsider its beliefs as well.

Hence, I think the conception of a philosophical zombie cannot survive contact with an AI that behaves like a human. We can only discuss with this level of detachment because such an AI does not exist and thus cannot evoke our empathy.

Narrative memory, probably.

A graph of relations that includes cause-effect links, time, and emotional connection (a reward function, for an AI); one with the capacity to self-update both by intention (the reward function pings so negative on a particular node or edge that it gets nuked) and by repetition (nodes/edges in specific connection combinations that consistently trigger rewards get reinforced).
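The narrative-memory graph described above can be sketched in a few lines. Everything here is invented for illustration (class name, reward scale, the nuke threshold); it just shows the two update paths named in the comment: repetition strengthening a cause-effect edge, and a strongly negative reward ping nuking one.

```python
class NarrativeMemory:
    """Toy relation graph with reward-driven self-update (illustrative only)."""

    def __init__(self, nuke_threshold=-5.0):
        self.nodes = {}   # concept name -> accumulated reward
        self.edges = {}   # (cause, effect) -> (accumulated reward, repetitions)
        self.nuke_threshold = nuke_threshold

    def observe(self, cause, effect, reward):
        self.nodes.setdefault(cause, 0.0)
        self.nodes.setdefault(effect, 0.0)
        self.nodes[cause] += reward
        self.nodes[effect] += reward
        r, n = self.edges.get((cause, effect), (0.0, 0))
        # Repetition strengthens the link; reward colours it emotionally.
        self.edges[(cause, effect)] = (r + reward, n + 1)
        self._update()

    def _update(self):
        # "Intentional" forgetting: any edge whose reward pings far enough
        # negative gets nuked outright, as described above.
        self.edges = {k: v for k, v in self.edges.items()
                      if v[0] > self.nuke_threshold}

memory = NarrativeMemory()
memory.observe("touched stove", "burned hand", reward=-6.0)
memory.observe("ate breakfast", "felt good", reward=1.0)
print(list(memory.edges))
# → [('ate breakfast', 'felt good')]  (the stove edge was nuked)
```

Whether nuking strongly negative memories is actually adaptive is a separate question; the sketch just follows the update rule as stated.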

So voodoo basically

This shit still occasionally falls apart on the highway after umpteen million generations of human evolution.