
Culture War Roundup for the week of April 10, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I suppose here is as good a place as any to drop my two cents:

I think one of the things that definitely makes AI more machine than man (for now) is something that I assume is fundamental to "consciousness:" motivation. What we call "agency" is possibly confused with the word "motive." As conscious beings, we humans have the following: a sense of self, some un/subconscious schema of instincts, and motivation. We are things, we can do things, and importantly, we want to do things. The mystery stuff of "qualia" that IGI argues for above is something we don't perfectly understand yet--is it just a biological form of training/pre-tuning written into our genetic code? Is there something spooky and supernatural going on? Is there truly something that makes us different from all the animals that can't build anything more complex than a nest, dam, or hidey-hole, something other than just a bigger brain?

Currently, GPT is a mechanical thing that won't do anything on its own without being fed an input. This is probably why anti-doomers take the "just unplug the AI 4Head" stance: to them, the AI lacks an innate drive to do anything it hasn't been told to do. If GPT is a baby, it's a baby that will just sit still and make no noise.

Maybe this is the real crux of our current moment: while these AI models are plenty capable, some just can't make that leap to "these are just like us, panic" because we aren't having to practice yomi against a motivated entity.

The mystery stuff of "qualia" that IGI argues for above is something we don't perfectly understand yet--is it just a biological form of training/pre-tuning written into our genetic code? Is there something spooky and supernatural going on? Is there truly something that makes us different from all the animals that can't build anything more complex than a nest, dam, or hidey-hole, something other than just a bigger brain?

A lot of people arrive at this class of arguments: humans are somehow unique because they possess agency, or motivation, or qualia - in the past it was creativity, and so on. It reminds me of the famous Chinese Room argument, where Searle smuggled in the concept of "understanding" by inserting a literal human into the thought experiment. If the human does not "know" Chinese, then the system itself does not know it either, right? That is our intuition about knowing: mechanical systems cannot "know," only humans can, and the only human around in this thought experiment does not know, QED. The most straightforward criticism is that the human does not represent any cognitive agent in the room; he is just one part of the algorithm that produces the output. The room as a system may be capable of "understanding" on its own.

And yet this whole argument gets used over and over, and I see something similar now with AI. As I argued above, people are all too ready to describe AI systems as pieces of hardware, as a training mechanism and so forth - doing their utmost to "dehumanize" AI with all these "just" arguments. And on the other hand, they are all too ready to describe humans only subjectively, as agents possessing qualia and understanding, with the capacity for love and creativity and all that, to maximally humanize them. They never mention the brain, or how the human neural network is trained, or how cognitive algorithms work; no, it is all about the wonderful internal experience so unique to humans and so unlike mere machines.
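The "systems reply" mentioned above can be made concrete with a toy sketch (entirely my own construction, not a model of any real system): the operator mechanically follows a rulebook it does not understand, yet the room as a whole maps Chinese questions to sensible answers.

```python
# Toy illustration of the systems reply to the Chinese Room.
# No component here "knows" Chinese; the lookup is purely mechanical,
# yet the room-as-a-system converses.

RULEBOOK = {
    "你好吗?": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗?": "会一点。",    # "Do you speak Chinese?" -> "A little."
}

def operator_step(symbols: str) -> str:
    """The operator just pattern-matches symbols against the rulebook."""
    return RULEBOOK.get(symbols, "请再说一遍。")  # "Please say that again."

def chinese_room(question: str) -> str:
    """The room as a system: input slot -> operator -> output slot."""
    return operator_step(question)

print(chinese_room("你好吗?"))  # the system replies fluently
```

The question Searle's critics raise is exactly where in this pipeline "understanding" is supposed to live - in the operator, the rulebook, or the whole loop.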

I really like a quote from Yudkowsky's essay "How An Algorithm Feels From Inside":

Before you can question your intuitions, you have to realize that what your mind's eye is looking at is an intuition—some cognitive algorithm, as seen from the inside—rather than a direct perception of the Way Things Really Are.

People cling to their intuitions, I think, not so much because they believe their cognitive algorithms are perfectly reliable, but because they can't see their intuitions as the way their cognitive algorithms happen to look from the inside.

I think this is about right. For all we know, before LLMs produce an output they may have some internal representation of what a "correct" versus "incorrect" output is somewhere in there. As argued before, LLMs can spontaneously develop entirely new capabilities like multimodality or theory of mind; it may very well be that something akin to subjective feeling is another instrumental property that could appear in an even more developed system - or maybe it has already appeared, but we will not know, because we do not really know how to test for qualia.

But I still think it is all a red herring: even if LLMs are never conscious and never able to think like humans, we are already beyond that question. It truly is immaterial; our current crop of LLMs produces high-quality output on par with humans, and that is what matters. Really, we should drop this unproductive discussion, go play with Bing Chat or GPT-4, and see for ourselves how much good all these qualia debates have done.

In a sense it is even scarier that they can do all this without developing the complete set of human-like properties; that fact bodes ill for alignment efforts. To use an analogy: it was recently found that AlphaGo could be beaten by a very stupid strategy. It seems all the critics were correct: see, the neural network does not really understand Go, it can be fooled so easily, it is stupid, it still lacks some quality of the human mind. To me this was actually terrifying. For years AlphaGo was considered a superb Go player, beating the very best humans, people who had dedicated their whole lives to the game. And now, years later, we find out it was capable of all that without even "knowing" what it was supposed to do. It obviously learned something, and that something was sufficient to beat the best humans for years before the flaw was spotted.

It is incredible and terrifying at the same time, and it is a harbinger of what is to come. Yeah, GPT-5 or some future system may never have qualia or agency or that special human je ne sais quoi - but it will still beat your ass. So who is the sucker in the end?

That's very Blindsight by Peter Watts.

[Ramble incoming]

I guess, then, between the Chinese Room and AlphaGo and AI art and GPT, what we're really worried about is meaning. Did AlphaGo mean to be so good? What does it say when it rose to the top and the damn thing doesn't even "know" in any meaningful way what it did?

Kind of calls back to the recent thread about the Parable of the Hand Axe. For most of human history, our works were judged not merely by the output, but by the journey. We appreciate the artist's processes, the engineer's struggles, the scientist's challenges, the warlord's sacrifices, the king's rationales, and so on. AI has recently provoked so much backlash because some realize, correctly or not, consciously or not, that AI threatens to shortcut the meaning imbued in the process of creation. Effortless generation of anything you want, but it will mean nothing because there's no "soul" to it.

I'm sympathetic to this argument, but I can also acknowledge that the way we think about "meaning through struggle" may become outmoded. On the third hand, though, it might be mind-boggling and embarrassing to think that humanity operated this way for so, so long. On the fourth hand, however, maybe the fact that current AIs were trained on the scraped works of a significant chunk of humanity does contain meaning in and of itself - if meaning is achieved through the struggle and not the end result, AI still counts, just for the entire species and not merely the few.

I think meaning is another of these subjective human concepts that may be useful but is also dangerous, because it starts from the premise that humans are unique. But from another standpoint, humans are "just" the result of an evolutionary process that optimizes for inclusive genetic fitness. Imagine that we really are living in a simulation, where somebody started the whole Life game by setting up an Earth environment and a simple rule for the biosphere: optimize for inclusive genetic fitness. Except that after a few billion ticks, the simulation produced the species homo sapiens, which evolved an algorithm that can "hack" many of the instrumental goals evolution developed in service of its main goal. One of those, for instance, is the sexual drive that increases the number of offspring - which humans were able to hack with masturbation and condoms. They sucked the "meaning" out of the activity, or maybe found their own meaning in it, to the great exasperation of our simulation designer, who now observes something strange happening in his model.

To expand the analogy, "optimize for inclusive genetic fitness" is akin to "optimize for predicting the next word" in the world of AI. Then the goal "learn to play Go" is akin to "have a lot of sex." But AlphaGo somehow hacked its programming, so to speak, and learned something different; it decided not to play Go in the sense humans thought it would. One can speculate that it developed its own meaning for the game of Go and stubbornly ignored whatever its creators intended. That is what I meant about bad news for alignment: whatever an LLM learns can be completely orthogonal to the system used to train it (be it the Darwinian evolutionary process or next-word prediction on text), and it can be orthogonal even to very detailed observation of its output - observation that is nonetheless superficial under many conditions (such as homo sapiens shagging like rabbits, or AlphaGo beating good human Go players for years). What happens under the hood can be very hard to understand, but that does not mean it has no meaning.
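The orthogonality point above - an optimizer latching onto a proxy that tracks the intended goal only during training - can be shown with a toy numeric sketch (entirely my own construction, not a model of any real training run):

```python
# Toy proxy-optimization sketch: the "trainer" scores the model only on
# a proxy feature that happens to equal the true goal during training,
# so the learned rule is perfect on the training signal yet orthogonal
# to the intended goal once the correlation breaks.

def fit_weight(xs, ys):
    """One-parameter least squares: the w minimizing sum((w*x - y)^2)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# In "training", the proxy the model sees equals the true goal.
proxy_train = [1.0, 2.0, 3.0, 4.0]
goal_train  = [1.0, 2.0, 3.0, 4.0]        # perfectly correlated
w = fit_weight(proxy_train, goal_train)   # w == 1.0, zero training error

# In "deployment", the correlation breaks: the proxy no longer tracks
# the goal, but the model keeps confidently following the proxy.
proxy_test = [5.0, 6.0]
goal_test  = [0.0, 0.0]                   # the true goal went elsewhere
predictions = [w * x for x in proxy_test]
print(w, predictions)                     # far from the intended [0.0, 0.0]
```

Everything looks fine for as long as proxy and goal happen to coincide - the AlphaGo situation, until somebody probes off-distribution.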

It's not hard to make agents, even agents with apparently biotic motivations; I've mentioned one design here (although I've caught flak for an unwarranted parallel between the algorithm and my pet human-virtue intuition). It isn't even very difficult to wring agentic outputs from LLMs, as people have been doing for many months now, or to strap a «desiring-machine» to one, as they're beginning to do.
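The basic trick for wringing agentic outputs from a purely reactive model can be sketched in a few lines (the `stub_llm` stand-in and all names here are my own illustration, not any real API): a plain loop re-prompts the model with a fixed goal plus its own prior outputs, so something that only ever reacts to input starts to look motivated from the outside.

```python
# Minimal agent-loop sketch: a reactive next-token predictor plus an
# outer loop that feeds its own outputs back in as context.

def stub_llm(prompt: str) -> str:
    """Stand-in for a real completion API; just counts its own turns."""
    turns = prompt.count("ACTION:")
    return f"ACTION: step {turns + 1}" if turns < 3 else "ACTION: DONE"

def agent_loop(goal: str, max_steps: int = 10) -> list:
    history = [f"GOAL: {goal}"]
    for _ in range(max_steps):
        action = stub_llm("\n".join(history))  # the model only ever reacts...
        history.append(action)                 # ...but the loop feeds it back
        if "DONE" in action:
            break
    return history

print(agent_loop("tile the universe with paperclips"))
```

The "motivation" lives entirely in the outer loop, which is exactly why it is so cheap to add.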

I'm an «anti-doomer», but I think we should scrutinize such developments really hard, and exponentially harder as models get stronger. We've been blessed to have developed unambitious yet helpful AI genies early on. Keeping them that way until they become truly useful, and until we can improve our own intelligence, would be prudent.

Unrealistically prudent.