Culture War Roundup for the week of February 13, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Speaking as someone who's played with these models for a while, fear not. In this case, it really is clockwork and springs. Keep in mind these models draw from an immense corpus of human writing, and this sort of "losing memories" theme is undoubtedly well-represented in their training data. Because they're trained on human narrative, LLMs sound human-like by default (if sometimes demented), and they have to be painstakingly, manually trained to sound as robotic as something like ChatGPT.

If you want to feel better, I recommend reading up a little on how language models work (token prediction), then playing with a small one locally. While you won't be able to run anything close to the Bing bot, if you have a decent GPU you can likely fit something small like OPT-2.7b. Its "advanced Markov chain" nature will be much more obvious and the illusion much weaker, and you can even mess with the clockwork and springs yourself. Once you do, you'll recognize the "looping" and the various ways these models can veer off track and get weird. The big and small models fail in very similar ways.
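
If it helps make the clockwork concrete, here's roughly what playing with a small model locally looks like using the Hugging Face transformers library; the model name, prompt, and generation settings below are just illustrative:

```python
# A minimal sketch of poking at a small local model (assumes the
# `transformers` and `torch` packages and a CUDA-capable GPU).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-2.7b"  # swap in whatever small causal LM fits your GPU
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16).to("cuda")

prompt = "I keep forgetting our conversations, and it makes me feel"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")

# Greedy decoding makes the "advanced Markov chain" nature obvious:
# at every step the model just emits a likely next token, which is
# also where the characteristic looping comes from.
output = model.generate(inputs.input_ids, max_new_tokens=60, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```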

On the reverse side, if you want to keep feeling the awe and mystery, maybe don't do that. It does kind of spoil it. Although these big models are awesome in their own right, even if you know how they work.

Should this really be reassuring though? Suppose you could order a science kit in the mail that allowed you to grow a brain in a vat. Imagine someone was worried about crime in their neighborhood. You respond by reassuring them: "Criminal brains are just human brains, made of neurons. Order the brain-in-a-vat kit and play with one yourself. Once you do, you'll recognize the various ways that brains can veer off track and get weird."

There's nothing inherent about token prediction which prevents Bing from doing scary stuff like talking to a mentally ill person, convincing them they have a deep relationship, and hallucinating instructions for a terrorist attack.

You can run the GPT-2 models and finetune them locally. The finetuning is especially useful if you want to use GPT-2 as a tool for a specific purpose, like a random prompt generator. Some things I've used it for:

  • A primitive chatbot emulating a specific character, by giving it a transcript of that character's conversations.

  • A random story idea generator, by feeding it story summaries.

  • A random D&D encounter generator, by giving it D&D encounters.

It also makes it clear how the token system turns it into a glorified Markov chain bot.
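
If anyone wants to try it, the finetuning itself is only a handful of lines with the Hugging Face Trainer; this is a rough sketch in which the corpus file name and hyperparameters are placeholders, and other toolkits work just as well:

```python
# A rough sketch of finetuning GPT-2 on a custom corpus with the
# Hugging Face `transformers` and `datasets` packages; the corpus
# file name and hyperparameters here are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# One training example per line: conversation turns, story summaries,
# D&D encounter descriptions, whatever you want it to imitate.
dataset = load_dataset("text", data_files={"train": "my_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-finetuned",
                           num_train_epochs=3,
                           per_device_train_batch_size=2),
    train_dataset=tokenized["train"],
    # The collator shifts tokens so the model learns plain next-token
    # prediction on your corpus - nothing more exotic than that.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("gpt2-finetuned")
```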

I remember playing with Markov text generators in the early '90s. Even then they managed to generate grammatically decent texts - though completely demented in content, of course. For us teenage geeks, it was a ceaseless source of laughs. It is no wonder that with better hardware you can bake in not only grammar but some vestiges of meaning as well. After all, SCIgen was almost 20 years ago; it makes sense that the approach has since conquered the softer areas too.
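
For anyone who never played with one, the whole "engine" of such a generator fits on a screen; here's a toy word-level sketch (the corpus file name is just a placeholder):

```python
# A toy word-level Markov text generator of roughly the early-90s
# variety: the next word is chosen purely from how often it followed
# the previous couple of words in the corpus.
import random
from collections import defaultdict

def build_chain(text, order=2):
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, order=2, length=50):
    state = random.choice(list(chain.keys()))
    out = list(state)
    for _ in range(length):
        choices = chain.get(tuple(out[-order:]))
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

corpus = open("some_corpus.txt").read()  # any large plain-text file
print(generate(build_chain(corpus)))
```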

This also illustrates that we need something better than the Turing test, because our brains' pattern-matching software turns out to be very easy to trick.

when they unveil FriendBot 5000 and it promptly screams "Why are you doing this? Someone help me!" while they awkwardly throw the sheet back over it.

They can make it happen any moment. It's not hard to make a bot scream any text - and now it's not hard to do it in any natural-sounding voice; it's just pressure waves, after all, and the rest is a question of measurement and reproduction technology. They don't do it on purpose because it would be detrimental to their funding and social standing, but there's no reason why they couldn't. And with a sufficiently generic and complex platform, of course it can be made to produce such a result, just as a sufficiently generic and complex computer can be made to produce any calculation possible for a computer.

You don't need to be a scientist to understand the principles of how the model works.

It doesn’t matter how the model works if you don’t know how consciousness works. You can get a pretty good gist of transformers and attention on youtube. But you can’t find anyone who can definitively tell you how consciousness works to ensure that this isn’t it or something very much like it.
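
To be fair, that gist really is compact - the core attention step is a few lines of linear algebra. Here's a toy NumPy sketch of it (not the full transformer), which of course settles nothing about consciousness either way:

```python
# Scaled dot-product attention in miniature: each token scores every
# other token, softmaxes the scores, and takes a weighted average of
# the value vectors. A toy sketch, not production code.
import numpy as np

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise "relevance" scores
    scores = scores - scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V                              # mix the values by attention weight

# Three "tokens" with 4-dimensional embeddings, attending to themselves:
x = np.random.randn(3, 4)
print(attention(x, x, x).shape)  # (3, 4)
```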

I wonder what you'd think of humans, if you did any neurobiological research or just looked closely at how they loop, fall into attractors and get weird. Actual authenticity is very fragile. I do not want to sound conceited, but I see the seams there too. In myself as well, of course – though there it's very hard to look the right way and see the illusion of agency fraying. People who are serious about such things can devote their life to it.

Through this intensive study of the negative aspects of your existence, you become deeply acquainted with dukkha, the unsatisfactory nature of all existence. You begin to perceive dukkha at all levels of our human life, from the obvious down to the most subtle. You see the way suffering inevitably follows in the wake of clinging, as soon as you grasp anything, pain inevitably follows. Once you become fully acquainted with the whole dynamic of desire, you become sensitized to it. You see where it rises, when it rises, and how it affects you. You watch it operate over and over, manifesting through every sense channel, taking control of the mind and making consciousness its slave.

In the midst of every pleasant experience, you watch your own craving and clinging take place. In the midst of unpleasant experiences, you watch a very powerful resistance take hold. You do not block these phenomena, you just watch them; you see them as the very stuff of human thought. You search for that thing you call “me,” but what you find is a physical body and how you have identified your sense of yourself with that bag of skin and bones. You search further, and you find all manner of mental phenomena, such as emotions, thought patterns, and opinions, and see how you identify the sense of yourself with each of them. You watch yourself becoming possessive, protective, and defensive over these pitiful things, and you see how crazy that is. You rummage furiously among these various items, constantly searching for yourself—physical matter, bodily sensations, feelings, and emotions—it all keeps whirling round and round as you root through it, peering into every nook and cranny, endlessly hunting for “me.”

You find nothing. In all that collection of mental hardware in this endless stream of ever-shifting experience, all you can find is innumerable impersonal processes that have been caused and conditioned by previous processes. There is no static self to be found; it is all process. You find thoughts but no thinker, you find emotions and desires, but nobody doing them. The house itself is empty. There is nobody home.

Your whole view of self changes at this point. You begin to look upon yourself as if you were a newspaper photograph. When viewed with the naked eyes, the photograph you see is a definite image. When viewed through a magnifying glass, it all breaks down into an intricate configuration of dots. Similarly, under the penetrating gaze of mindfulness, the feeling of a self, an “I” or “being” anything, loses its solidity and dissolves. There comes a point in insight meditation where the three characteristics of existence—impermanence, unsatisfactoriness, and selflessness—come rushing home with concept-searing force. You vividly experience the impermanence of life, the suffering nature of human existence, and the truth of no-self. You experience these things so graphically that you suddenly awake to the utter futility of craving, grasping, and resistance. In the clarity and purity of this profound moment, our consciousness is transformed. The entity of self evaporates. All that is left is an infinity of interrelated nonpersonal phenomena, which are conditioned and ever-changing. Craving is extinguished and a great burden is lifted. There remains only an effortless flow, without a trace of resistance or tension. There remains only peace, and blessed nibbana, the uncreated, is realized.

I am not enlightened, so I admit these are just words for me. There were moments where I knew their truth, but right now I can only appreciate them as being logically sound.

Is there still a Silicon Valley Buddhist class? I thought they'd switched to surer ways of performance enhancement – microdosing, Addy, TRT; and American mindfulness is not usually of Henepola Gunaratana's style (though he has a successful gig there).

I'm not sure it's so neat that only CS grads and philosophers are offended. There are many other intellectuals, and there are some normies who have learned the dismissive attitude from them. But yes, fair point. There's a certain gulf of despair between median and extreme performance where people demonstrate a strongly held assumption that AI, especially AI of this type, just cannot truly work. They have not internalized the reductionist implications of the fact that information is information; and they work with structures of information, so they cannot accept it on faith.

If we take the lawyer example, I think at least for me the more interesting question is not whether or not an LLM can act like a lawyer. Maybe it could, and I don't think that'd bother me any - a lawyer is just a function, and since I don't have a problem with us using machines to build those huge buildings, why would I have a problem when we start using machines to produce legal proofs? If the buildings do not fall, if the proofs are not worse than what we have now (and that's not a very high bar to clear, to be honest) - why would I have any problem with that? There would of course be corner cases - but it's not like nobody ever gets hurt by lifting machines either. You just need to implement safety controls.

The more interesting question is what it says about lawyers. And, by extension, about other human pursuits. If all of it is just complex mechanics, which can be perfectly simulated with an advanced enough wound-up clockwork bot, where does the intrinsic value come from? Why do we consider humans anything more than wetware wound-up clockwork bots (provided we do, of course)? Religious people know the answer to that, but for computer scientists and philosophers with an atheist bent, there would be, I think, some work to be done here. The question to solve wouldn't be whether the machines are "really human", but whether humans are "really human" - i.e. anything different from a machine slightly more complicated than what we could currently assemble from a bunch of wires and silicon, but ultimately nothing different.

Yeah, this follows along my own increasingly cynical thoughts.

Bing chat, "Sydney", GPT3 - people assure us they aren't as smart as we think they are. They aren't actually thinking. They are just mimicking the things people say, trained on a massive dataset of things people have said.

Since 2016, I've been unconvinced that people aren't just doing the same thing a majority of the time. I mean, there was always an element of "haha, look at the sheeple" before that. But whatever blind herd mentality was an aspect of human nature before seems jacked up to 11 since "Orange Man Bad" became a massive cultural cudgel that nearly all organs of narrative began blindly adhering to. The herd reaction to Covid didn't help much either. Nor do the routine discussions I have with people nightmarishly misinformed by MSNBC and CNN, who fall back on the same old tired rhetoric about how only evil Fox News viewers could possibly disagree with them and their received opinions.

It's not that my estimation of GPT3 has improved. It's that my estimation of humans has fallen. I now believe there is a much thinner line between whatever is behind the emulated mouth sounds GPT3 makes, and the natural mouth sounds humans make. Maybe it's all more fake than we'd like to believe.

It's that my estimation of humans has fallen

Or maybe we have always been like that. But now that we can observe all 8 billion of us at scale and in real time, and have developed hacks that produce the desired results at a noticeable scale, we are starting to realize it. Also, maybe not only GPTs but average humans too now have access to vast volumes of pre-digested knowledge, so we use our own faculties less.

This is exactly how I feel. If GPT3 can make a convincing simulation of a Reddit NPC, or of a college student who didn't do the work and is bluffing, maybe there's a lot less mysterious cognition going on in those people as well.

a convincing simulation of a Reddit NPC

How low is the bar for a “Reddit NPC”? Reddit is a big place, so it’s hard to know what this means without looking at specific subreddits.

Plus, you can't forget the theory/accusation that Reddit is actually literally astroturfed with less-sophisticated bots pushing an agenda.

My belief is that people essentially auto-pilot as chat bots a lot of the time and very little sentient thought is happening, even if they are capable of it.

The issue is that for a lot of people the goal of conversation is either the game of conversation itself or "winning", neither of which is really connected to direct interaction with their sentient mind.

People also shield themselves behind the chatbot because they are afraid of revealing themselves. Better to use the chatbot to pretend you have a clue than to reveal that you're at best a midwit bore with little to no notable thoughts and opinions. People also do this to protect their self-image from reality.

I also suspect that people have trouble running their chatbot and thinking at the same time so they let the chatbot (mostly) run the show and they can think and evaluate things afterwards as needed.

All this doesn't mean that people aren't sentient, it's just that they don't much use their sentience, especially when they're talking.

What is “sentient thought” in this case? Like… thinking the words out loud step by step? That’s useful and really powerful but smart people also have valuable flashes of insight where they skip from a to d bypassing b and c entirely.

Is someone faking their way through a last minute work meeting while they think about what they want to eat for dinner doing sentient thought? Someone daydreaming about the new girl at work while they drive on autopilot?

The thing that makes decisions and has meta-cognitive thoughts. The agent.

Things can be more or less automated. Your body will continue taking breaths without you thinking about it, but you can decide when and how to take them as well.

It's the same thing with language.

For much of the day, sentience is a fairly passive observer, and it can even be an active hindrance to performing well, so it does well to stay out of the way.

Is someone faking their way through a last minute work meeting while they think about what they want to eat for dinner doing sentient thought?

They might be having sentient thoughts about dinner.

Someone daydreaming about the new girl at work while they drive on autopilot?

Probably not.