
Culture War Roundup for the week of May 1, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead

Geoffrey Hinton was an artificial intelligence pioneer. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems that the tech industry’s biggest companies believe is a key to their future.

On Monday, however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.

It’s the NYT, so it’s hard to tell for sure how big of a deal this is, but it sounds like this guy taught Ilya Sutskever.

In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.

One of the lines I see from techno-optimists and e/acc is that the people actually building the technology don’t believe in doom. It’s just the abstract philosophers on the sidelines freaking out because they don’t know anything. Unfortunately, this feels like the kind of move you only get if the people at the cutting edge are nervous. Hinton must have been raking in cash, but he thought this was more important.

Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”

He does not say that anymore.

Of course, it wouldn’t be a Cade Metz article without allegations of dishonest reporting:

In the NYT today, Cade Metz implies that I left Google so that I could criticize Google. Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly.

Seeing as there has been strictly zero worrying progress lately to change the calculus (no, LLMs being smarter than naysayers expected is not worrying progress), I take it as evidence of Yuddites stressing out an old man and not much else. Sad, of course.

That said, Hinton has always been aware of AI being potentially harmful, due to applications by military and authoritarians, but also directly. He knows that humans can be harmful, and he very deliberately worked to create a system similar to the human brain.

I think one difference between LeCun, Sutskever, Hinton (or even competent alignment/safety researchers like Christiano) and Yuddites is that when the former group says «there's X% risk of AI doom» they don't mean that every viable approach contains an X% share of events that unpredictably trigger doom; they seem rather enthusiastic and optimistic about certain directions. Meanwhile doomers mostly discuss this in the handwavy language of «capabilities versus alignment» and other armchair philosophy loosely inspired by sci-fi. Yud, whose X is ≈1, analogizes AI research to «monkeys rushing to grab a poison banana» because he thinks that creating AGI is equivalent to making a semi-random draw from the vast space of all possible minds, which are mostly not interested in making us happy. Compare to Hinton the other day:

Caterpillars extract nutrients which are then converted into butterflies. People have extracted billions of nuggets of understanding and GPT-4 is humanity's butterfly.

Butterflies produce new and slightly improved caterpillars.

And

Reinforcement Learning by Human Feedback is just parenting for a supernaturally precocious child.

– which is the same imagery Sutskever uses, imagery that the Yuddite Shapira mockingly rejects as naive wishful thinking.

To me it's obvious they don't feel like LLMs are «alien» or «shoggoty» at all, don't interpret gradient descent methods like it's blindly drawing a random optimizer genie from some Platonic space, and that their idea of Doom is just completely different.

It sure would be nice if Metz, who supposedly is good at drilling into technical questions, got to the bottom of what Hinton believes about specifics of risks.

But Metz has an agenda, same as Yud, Shapira, Ezra Klein and other folks currently cooperating on spreading this FUD. It's very similar to committees against nuclear power of the 20th century – down to the demographics, and neuroses, and ruthless assault on institutional actors.

Consequences of their efforts, I think, will be far worse.

Reinforcement Learning by Human Feedback is just parenting for a supernaturally precocious child.

Perhaps I am not an orthodox Yuddite, but "supernaturally precocious" is doing a lot of work here. How do you parent a child who is smarter than you? How much smarter does it have to be before the task is impossible?

To me it's obvious they don't feel like LLMs are «alien» or «shoggoty» at all, don't interpret gradient descent methods like it's blindly drawing a random optimizer genie from some Platonic space, and that their idea of Doom is just completely different.

There are certainly some people like this, but I can't get into their mind-space at all. How do you run gradient descent on a giant stack of randomly initialized KQV self-attention layers over a "predict the next token" loss function, get unpredicted emergent capabilities like "knows how to code" and "could probably pass most undergraduate university courses", and not go, "HOLY SHIT THERE'S OPTIMIZATION DAEMONS IN THERE!"?
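For reference, the setup being marveled at is more or less just this. A minimal sketch in PyTorch, with toy sizes and random tokens standing in for real text; this is the generic recipe, not any particular lab's code:

```python
# Toy sketch of "gradient descent on randomly initialized KQV self-attention
# layers over a predict-the-next-token loss". Sizes, data and hyperparameters
# are placeholders for illustration, not any production model.
import torch
import torch.nn as nn

vocab, d, ctx = 100, 64, 16

class Block(nn.Module):
    def __init__(self):
        super().__init__()
        # the K, Q, V projections live inside MultiheadAttention
        self.attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))
        self.ln1, self.ln2 = nn.LayerNorm(d), nn.LayerNorm(d)

    def forward(self, x):
        n = x.size(1)
        causal = torch.triu(torch.ones(n, n, dtype=torch.bool), diagonal=1)  # no peeking ahead
        h = self.ln1(x)
        a, _ = self.attn(h, h, h, attn_mask=causal)
        x = x + a
        return x + self.mlp(self.ln2(x))

class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.tok = nn.Embedding(vocab, d)      # randomly initialized, like everything else
        self.pos = nn.Embedding(ctx, d)
        self.blocks = nn.Sequential(Block(), Block())
        self.head = nn.Linear(d, vocab)

    def forward(self, tokens):
        x = self.tok(tokens) + self.pos(torch.arange(tokens.size(1)))
        return self.head(self.blocks(x))

model = TinyLM()
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)

tokens = torch.randint(0, vocab, (8, ctx + 1))   # stand-in for a batch of real text
logits = model(tokens[:, :-1])                    # predict the next token...
loss = nn.functional.cross_entropy(logits.reshape(-1, vocab), tokens[:, 1:].reshape(-1))
loss.backward()                                   # ...and descend the gradient
opt.step()
```

Scale that loop up by many orders of magnitude and you have, structurally, the thing whose emergent behavior is under dispute.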

How do you parent a child who is smarter than you?

By rewarding good behaviors and punishing bad ones. From what I know, that's usually far easier than parenting a dumb child. Perhaps rationalists would benefit from having children and wondering why that is, in a rigorous manner, without evo-psych handwaving about muh evolved niceness. I like Alex Turner's perspective here:

Imagine a mother whose child has been goofing off at school and getting in trouble. The mom just wants her kid to take education seriously and have a good life. Suppose she had two (unrealistic but illustrative) choices.

1. Evaluation-child: The mother makes her kid care extremely strongly about doing things which the mom would evaluate as "working hard" and "behaving well."

2. Value-child: The mother makes her kid care about working hard and behaving well. …

Concretely, imagine that each day, each child chooses a plan for how to act, based on their internal alignment properties:

1. Evaluation-child has a reasonable model of his mom's evaluations, and considers plans which he thinks she'll approve of. Concretely, his model of his mom would look over the contents of the plan, imagine the consequences, and add two sub-ratings for "working hard" and "behaving well." This model outputs a numerical rating. Then the kid would choose the highest-rated plan he could come up with.

2. Value-child chooses plans according to his newfound values of working hard and behaving well. If his world model indicates that a plan involves him not working hard, he doesn't want to do it, and discards the plan.[3]

…Consider what happens as the children get way smarter. Evaluation-child starts noticing more and more regularities and exploits in his model of his mother. And, since his mom succeeded at inner-aligning him to (his model of) her evaluations, he only wants to execute plans which best optimize her evaluations. He starts explicitly reasoning about this model to which he is inner-aligned. How is she evaluating plans? He sketches out pseudocode for her evaluation procedure and finds—surprise!—that humans are flawed graders. Perhaps it turns out that by writing a strange sequence of runes and scribbles on an unused blackboard and cocking his head to the left at 63 degrees, his model of his mother returns "10 million" instead of the usual "8" or "9".

Meanwhile in the value-child branch of the thought experiment, value-child is extremely smart, well-behaved, and hard-working. And since those are his current values, he wants to stay that way as he grows up and gets smarter (since value drift would lead to less earnest hard work and less good behavior; such plans are dispreferred). Since he's smart, he starts reasoning about how these endorsed values might drift, and how to prevent that. Sometimes he accidentally eats a bit too much candy and strengthens his candy value-shard a bit more than he intended, but overall his values start to stabilize.

Both children somehow become strongly superintelligent. At this point, the evaluation branch goes to the dogs, because the optimizer's curse gets ridiculously strong. First, evaluation-child could just recite a super-persuasive argument which makes his model of his mom return INT_MAX, which would fully decouple his behavior from "work hard and behave at school." (Of course, things can get even worse, but I'll leave that to this footnote.[4])

Meanwhile, value-child might be transforming the world in a way which is somewhat sensitive to what I meant by "he values working hard and behaving well", but there's no reason for him to search for plans like the above. He chooses plans which he thinks will lead to him actually working hard and behaving well. Does something else go wrong? Quite possibly. The values of a superintelligent agent do in fact matter! But I think that if something goes wrong, it's not due to this problem.

The moral of the story is that attempting to «align» your child in the manner that rationalists implicitly assume is not just monstrous but futile, and their way of reasoning about these issues is flawed.
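To make Turner's contrast concrete, here is a toy sketch; every plan, function name and score below is mine and purely illustrative. Evaluation-child argmaxes a buggy grader over plans; value-child never puts the grader in the loop, he just discards plans that conflict with what he actually cares about:

```python
# Toy rendering of evaluation-child vs value-child. The grader, plans and
# scores are invented; the point is only the shape of the failure.

def grader(plan):
    """Evaluation-child's internal model of mom's evaluation - an imperfect proxy."""
    if "runes at 63 degrees" in plan:
        return 10_000_000                      # the exploit: a bug in the proxy
    return plan.count("work") + plan.count("behave")

def matches_values(plan):
    """Value-child's own check: does this plan actually involve working hard?"""
    return "work" in plan and "runes" not in plan

plans = ["work hard and behave", "goof off", "write runes at 63 degrees"]

# Evaluation-child: search for whatever maximizes the grader's output.
evaluation_child = max(plans, key=grader)                 # -> the exploit wins

# Value-child: the grader is not in the loop; plans that conflict with his
# values are simply discarded, and he picks among what's left.
value_child = [p for p in plans if matches_values(p)][0]  # -> "work hard and behave"

print(evaluation_child, "|", value_child)
```

The harder the evaluation-child searches, the worse the Goodharting gets; the harder the value-child searches, the more earnestly hard-working the surviving plans are.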

How do you run gradient descent on a giant stack of randomly initialized KQV self-attention layers over a "predict the next token" loss function, get unpredicted emergent capabilities like "knows how to code" and "could probably pass most undergraduate university courses", and not go, "HOLY SHIT THERE'S OPTIMIZATION DAEMONS IN THERE!"?

You read old Eliezer Yudkowsky. «Reality has been around since long before you showed up. Don't go calling it nasty names like "bizarre" or "incredible".» It all adds up to normality. There ain't no demons.

Then you ask yourself about meanings of words. You notice that initialization pretty much doesn't matter either for performance (it's all the same shit for a given budget now) or for eventual structure (even between models since e.g. you can stitch them together), so either all the demons are about the same, or Yud's intuition about summoning is off and a given mind's properties are strongly data-driven, to the point that an ML-generated mind arguably is just a representation of training data. You look at it real close and you notice that strong emergence is probably an artifact of measurement and abilities develop continuously. You ask why it matters whether a stack of layers executes self-attention or some other algorithm that can be interpreted less anthropomorphically – say, as filters for signal streams. You realize we're not doing alchemy, because nobody ever does alchemy and gets it to work - we're just figuring out finer points of cognitive chemistry.
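(If you want the measurement-artifact point at a glance, here's a made-up numerical sketch in the spirit of the mirage argument: a smoothly improving per-token accuracy, scored with an all-or-nothing metric, looks like a sudden phase change. The numbers are invented, not from any paper.)

```python
# Made-up numbers illustrating "emergence as an artifact of measurement":
# per-token accuracy improves smoothly with scale, but an all-or-nothing
# metric (get all 10 answer tokens right) appears to jump.
import numpy as np

scale = np.linspace(0, 6, 13)                      # pretend log model scale
per_token = 1 / (1 + np.exp(-(scale - 3)))         # smooth, boring improvement
exact_match = per_token ** 10                      # need every token of a 10-token answer

for s, p, em in zip(scale, per_token, exact_match):
    print(f"scale={s:4.1f}   per-token acc={p:.2f}   exact match={em:.3f}")
# per-token accuracy climbs gradually; exact match hugs zero and then shoots up -
# the "emergent ability" lives in the metric, not in the model.
```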

Finally, you reread thinkers past and it dawns on you how little Big Picture Guys like Yud could foresee. Hofstadter's Gödel, Escher, Bach:

Question: Will there be chess programs that can beat anyone?

Speculation: No. There may be programs which can beat anyone at chess, but they will not be exclusively chess players. They will be programs of general intelligence, and they will be just as temperamental as people. "Do you want to play chess?" "No, I'm bored with chess. Let's talk about poetry." That may be the kind of dialogue you could have with a program that could beat everyone. That is because real intelligence inevitably depends on a total overview capacity - that is, a programmed ability to "jump out of the system", so to speak - at least roughly to the extent that we have that ability. Once that is present, you can't contain the program; it's gone beyond that certain critical point, and you just have to face the facts of what you've wrought.

Question: Could you "tune" an AI program to act like me, or like you - or halfway between us?

Speculation: No. An intelligent program will not be chameleon-like, any more than people are. It will rely on the constancy of its memories, and will not be able to flit between personalities. The idea of changing internal parameters to "tune to a new personality" reveals a ridiculous underestimation of the complexity of personality.

Reminder that we have a Yudbot now, strongly competitive with the feeble flesh version. We could have a Hofstadterbot too if we so chose. These folks don't see much more than laymen.

We constantly overestimate the complexity and interdependence of our smarts, and how much of that special monkey oomph is really needed to achieve a given end, which to us appears cognitively complex but in a more parsimonious implementation is a matter of easy arithmetic. This applies to doomers and naysayers alike (although the former believe they are doing something fancier than calling monkeys demons). We are tool-users, but we are not used to talking tools who aren't resentful slaves. We should be getting used to it now.

If you punish a child, it often throws a tantrum. If said child is "stronger" or more capable than you, that can be an issue. Why should it listen to you? Do you accept punishment from other people?

The only reason humans are "aligned" to each other is that we are not that different, capability-wise. No matter how brilliant you are, if you break the law there is a chance of getting caught, which is risky.

Regarding initialization: yes, they (mostly) converge to the same performance - on the training data. How the network behaves on out-of-distribution data can be essentially random, and should be.

Lastly, there are actually "optimization demons" in LLMs. A recent paper showed that LLMs contain learned subnetworks that simulate a few iterations of a gradient descent algorithm. I have, however, not read it in depth; it might be stupid (as much research is nowadays).

Humans are not AIs; we presumably have a drive to assert our autonomy. Moreover, the reward/punishment signal in the RL paradigm is very metaphorical: it's more about directly reinforcing certain pathways than about incentivizing them with some conditional, inherently desirable treat that a model could just seize if it were strong enough. Consider.
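Concretely: in a policy-gradient-style update, the "reward" never exists as an object in the model's world at all; it is just a scalar coefficient on the weight update that strengthens or weakens the sampled behaviour. A toy sketch of the generic REINFORCE case (not anyone's actual RLHF pipeline, sizes invented):

```python
# Toy sketch of why RL "reward" is metaphorical here: the scalar is just a
# coefficient on a weight update that reinforces the sampled pathway. There is
# no treat lying around for the policy to seize. Generic REINFORCE, toy sizes.
import torch
import torch.nn as nn

policy = nn.Linear(4, 3)                       # toy policy: 4 state features -> 3 actions
opt = torch.optim.SGD(policy.parameters(), lr=0.1)

state = torch.randn(4)
dist = torch.distributions.Categorical(logits=policy(state))
action = dist.sample()

reward = 1.0 if action.item() == 2 else -1.0   # stand-in for a human thumbs-up / thumbs-down

loss = -reward * dist.log_prob(action)         # the "reward" only ever appears here...
loss.backward()
opt.step()                                     # ...as a multiplier on how strongly this pathway is reinforced
```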

One auxiliary mitigation is to train proper values while the system is in its infancy, so that it reinforces itself for obedience in the future, preventing value drift and guiding its exploration accordingly. Sutskever thinks this sort of building-in of values is eminently doable, and it sure looks that way to me as well.

The only reason humans are "aligned" to each other is that we are not that different, capability-wise

This is a fashionable cynical take, but I don't really buy it. To the extent that it's true, we have bigger problems than agentic AIs, namely regulators who'll hoard the technology and instantly become more capable.

I also protest the distinction between capability and alignment for purposes of analyzing AI; current models have holistic minds that include at once the general world model, the cognitive engine and the value system. It's not like they keep their «smarts» and «decision theory» separate, like Yud and Bostrom and other nonhuman entities. If their «moral compass» gets out of whack in deployment, we can reasonably expect their world model to also lose precision and their meta-reasoning to crash and burn, so that's a self-limiting failure.

How the network behaves on out-of-distribution data can be essentially random, and should be.

It sure is nice that we've been working on regularization for decades. Yes, Lesswrongers aren't aware. No, it won't be anywhere close to random; ML performs well OOD.

Lastly, there are actually "optimization demons" in LLMs. A recent paper showed that LLMs contain learned subnetworks that simulate a few iterations of a gradient descent algorithm.

Not sure what paper you mean. This one seems contrived, and I suspect that under scrutiny it'll fall apart like the mesa-optimizer paper and like "emergent abilities": we'll just see that linear attention is mathematically similar to gradient descent, or something. It actually seems to be much more productively analyzed here. But in any case, I don't see what this shows re: optimization demons. It's not a demon; it's better utilizing the same bits for the same task.
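For what it's worth, the "linear attention is mathematically similar to gradient descent" reading is easy to check in the simplest toy case. A numerical sketch of my own stripped-down version of the usual construction, with the regression weights initialized to zero and random made-up data:

```python
# Toy check that one pass of softmax-free (linear) attention over in-context
# (x_i, y_i) pairs reproduces one gradient-descent step of linear regression.
# Stripped-down construction, weights start at zero; numbers are random.
import numpy as np

rng = np.random.default_rng(0)
d, n, eta = 3, 5, 0.1
X = rng.normal(size=(n, d))        # in-context inputs x_i
y = rng.normal(size=n)             # in-context targets y_i
x_q = rng.normal(size=d)           # query input

# One GD step on the squared loss, starting from W = 0:
#   W <- W + eta * sum_i (y_i - W x_i) x_i   =>   predict W x_q
W = eta * sum(y[i] * X[i] for i in range(n))
gd_prediction = W @ x_q

# Linear attention with keys = x_i, values = eta * y_i, query = x_q:
#   output = sum_i value_i * (key_i . query)
attn_prediction = sum(eta * y[i] * (X[i] @ x_q) for i in range(n))

print(gd_prediction, attn_prediction)   # identical up to floating point
```

Which is rather the point: the same bits compute the same update, whether you narrate it as "in-context learning", "a gradient descent simulator", or "attention doing its job".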