Culture War Roundup for the week of April 10, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


No, the whole point of what you believe to be «metacontrarianism» is that it's entirely possible to predict what a superintelligence will do, when we know what it has been trained for and how exactly it's been trained. Terry Tao is a mathematical superintelligence compared to an average human. What will he do? Write stuff, mainly about mathematics. GPT-4 is a superintelligence in the realm of predicting the next token. What will it do? Predict the next token superhumanly well. AlphaZero is a tabletop game superintelligence. What will it do? Win at tabletop games. And so it goes.

Intelligence, even general intelligence, even general superintelligence, is not that unlike physical strength as the capacity to exert force: on its own, as a quantity, it's a directionless, harmless capability to process information. Instrumental convergence for intelligence, as commonly understood by LWers, is illiterate bullshit.

What I admit we should fear is superagency, however it is implemented; and indeed it can be powered by an ASI. But that's, well, a bit of an orthogonal concern and should be discussed explicitly.

I'm sure you know about mesaoptimizers. Care to explain why that doesn't apply to your thesis?

That said, I'm not particularly married to any one particular flavor of AI risk. I'm taking the Uncle Vito approach. The AI naysayers have been consistently wrong for the last 5 years, whereas the doomers keep being proven correct.

I know what people have written about mesa-optimizers. They've also written about the Waluigi effect. I am not sure I «know» what mesa-optimizers, with respect to ML, actually are. The onus is on those theorists to mechanistically define them and rigorously show that they exist. For now, all evidence that I've seen has been either Goodhart/overfitting effects well-known in ML, or seeing-Jesus-in-a-toast tier things like Waluigi.

To be less glib, and granting the premise of mesa-optimizers existing, please see the Plakhov section here. In short: we do not need to know the internal computations and cogitations of a model to know that regularization will still mangle and shred any complex subroutine that does not dedicate itself to furthering the objective.

And it's not like the horny-humans-versus-evolution example, because «evolution» is actually just a label for some historical pattern that individual humans can frivolously refuse to humor with their life choices; in model training, the pressure to comply with the objective bears on any mesa-optimizer in its own alleged «lifetime», directly (and not via social shaming or other not-necessarily-compelling proxy mechanisms). Imagine if you received a positive or negative kick to the reward system conditional on your actions having increased/decreased your ultimate procreation success: this isn't anywhere near so easy to cheat as what we do with our sex drive or other motivations. Evolution allows for mesa-optimizers, but gradient descent is far more ruthless.
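A toy sketch of the regularization point above (my own illustration, not from the post): a linear "model" with two weight groups, where group A affects the training loss and group B is a freeloading «mesa» subroutine that contributes nothing to the objective. Under plain gradient descent with L2 weight decay, only decay acts on B, so it is steadily shredded toward zero, while A converges to the task. All names and numbers here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 4))
true_w = np.array([1.0, -2.0, 0.5, 3.0])
y = x @ true_w

w_a = rng.normal(size=4)   # useful weights: they produce the output
w_b = rng.normal(size=4)   # "freeloading subroutine": never touches the output
lam = 0.1                  # L2 weight-decay strength
lr = 0.1                   # learning rate

for _ in range(500):
    pred = x @ w_a                                   # w_b plays no role here
    grad_a = 2 * x.T @ (pred - y) / len(y) + 2 * lam * w_a
    grad_b = 2 * lam * w_b                           # only decay acts on w_b
    w_a -= lr * grad_a
    w_b -= lr * grad_b

# After training, the useless weights have been driven to (near) zero,
# while the useful ones sit near the (ridge-shrunk) task solution.
print(np.abs(w_b).max())
```

The point isn't that real networks are linear, just that the training pressure operates directly on a mesa-optimizer's own parameters, in its own «lifetime», rather than through a loose historical proxy like evolution.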

…Even that would be something of a category error. Models or sub-models don't really receive rewards or punishments; that is another misleading metaphor which is, in itself, predicated upon our clunky mesa-optimizing biological mechanisms. They're altered based on the error signal; the results of their behavior and their «evolution» happen on the same ontological plane, unlike our dopaminergic spaghetti, which one can hijack with drugs or self-deception. «Reinforcement learning should be viewed through the lens of selection, not the lens of incentivisation».

Humans have a pervasive agency-detection bias. When so much depends on whether an agent really is there, it must be suppressed harshly.


"The AI naysayers have been consistently wrong for the last 5 years, whereas the doomers keep being proven correct."

I beg to differ.

The doomers have been wrong for decades, and keep getting more wrong; the AI naysayers are merely wrong in another way. Yudkowsky's whole paradigm has failed, in large part because he's been an AI naysayer in every sense in which current AI has succeeded. Who is being proven correct? The people Yud, in his obstinate ignorance, had been mocking and still mocks: AI optimists and builders, pioneers of DL.

You are simply viewing this through the warped lens of Lesswrongian propaganda, with the false dichotomy of AI skepticism and AI doom. The central position both those groups seek to push out of the mainstream is AI optimism, and the case for it is obvious: less labor, more abundance, and everything good we've come to expect from scientific progress since the Enlightenment, delivered as if from a firehose. We are literally deploying those naive Golden Age Sci-Fi retrofuturist dreams that tech-literate nerds loved to poke holes in, like a kitchen robot that is dim-witted yet can converse in a human tongue and seems to have personality. It's supposed to be cool.

Even these doomers are, of course, ex-optimists: they intended to build their own AGI by the 2010s, and now that they've made no progress while others have struck gold, they're going to podcasts, pivoting to policy advice, attempting to character-assassinate those more talented others, and calling them derogatory names like «stupid murder monkeys fighting to eat the poison banana».

Business as usual. We're discussing a similar thing with respect to nuclear power in this very thread. Some folks lose it when a technical solution makes their supposedly necessary illiberal political demands obsolete, and begin producing FUD.

Good point about mesaoptimizers and the difference between evolution and gradient descent.

"The onus is on those theorists to mechanistically define them and rigorously show that they exist."

Here's where I disagree. As someone once said, "he who rules is he who sets the null hypothesis". I claim that the onus is on AI researchers to show that their technology is safe. I don't have much faith in glib pronouncements that AI is totally understood and safe.

Nuclear power, on the other hand, is well understood, has bounded downside, and is a mature technology. It's not going to destroy the human race. We can disprove the FUD against it. But in 1945, I might have felt differently.

It's not impossible but very hard in practice to prove a negative. You know that anti-nuclear people also demand extremely strong, cost-prohibitive proofs of safety, which is why we're in this mess. Of course, they have other nefarious motives to suppress human flourishing, but so do AI alarmists.

More to the point: decades ago, Nick Bostrom proposed a taxonomy of X-risks. Those risks should be rigorously compared, for we must hedge all of them somehow. Some of those risks seem highly likely to me, following from our prior social failures and even from particularities of the current trend, and are comparable to «total human death» in moral (if not «utilitarian») badness, so the argument that «risk from AI cannot be quantified» doesn't hold. Bostrom:

While some of the events described in the previous section would be certain to actually wipe out Homo sapiens (e.g. a breakdown of a meta-stable vacuum state) others could potentially be survived (such as an all-out nuclear war). If modern civilization were to collapse, however, it is not completely certain that it would arise again even if the human species survived. We may have used up too many of the easily available resources a primitive society would need to use to work itself up to our level of technology. A primitive human society may or may not be more likely to face extinction than any other animal species. But let’s not try that experiment.

If the primitive society lives on but fails to ever get back to current technological levels, let alone go beyond it, then we have an example of a crunch. Here are some potential causes of a crunch:

5.1 Resource depletion or ecological destruction

The natural resources needed to sustain a high-tech civilization are being used up. If some other cataclysm destroys the technology we have, it may not be possible to climb back up to present levels if natural conditions are less favorable than they were for our ancestors, for example if the most easily exploitable coal, oil, and mineral resources have been depleted. (On the other hand, if plenty of information about our technological feats is preserved, that could make a rebirth of civilization easier.)

5.2 Misguided world government or another static social equilibrium stops technological progress

One could imagine a fundamentalist religious or ecological movement one day coming to dominate the world. If by that time there are means of making such a world government stable against insurrections (by advanced surveillance or mind-control technologies), this might permanently put a lid on humanity’s potential to develop to a posthuman level. Aldous Huxley’s Brave New World is a well-known scenario of this type [50].

A world government may not be the only form of stable social equilibrium that could permanently thwart progress. Many regions of the world today have great difficulty building institutions that can support high growth. And historically, there are many places where progress stood still or retreated for significant periods of time. Economic and technological progress may not be as inevitable as it appears to us.

6.3 Repressive totalitarian global regime

Similarly, one can imagine that an intolerant world government, based perhaps on mistaken religious or ethical convictions, is formed, is stable, and decides to realize only a very small part of all the good things a posthuman world could contain.

Such a world government could conceivably be formed by a small group of people if they were in control of the first superintelligence and could select its goals. If the superintelligence arises suddenly and becomes powerful enough to take over the world, the posthuman world may reflect only the idiosyncratic values of the owners or designers of this superintelligence. Depending on what those values are, this scenario would count as a shriek.

It is counterproductive to focus only on the well-propagandized model of AI takeover through FOOM, in an age where AI built on principles radically different from those preferred by the FOOM argument's inventors is undergoing its Cambrian explosion; and in doing so, to exacerbate those Crunch-type risks. It is unprincipled. Moreover, it's wishful thinking: if only we could guard our asses from this one threat model! Perhaps one type of risk is truly greater than another, in raw probability or expected negative value or both. But just rehashing thought experiments about Seed AI from the 90s won't suffice to prove that the orthodox AI risk is the greater evil.

Now Bostrom himself proposes building a 6.3 regime, and Eliezer helpfully paves the way to it through his alarmism about training of capable models. I say we should at least demand they spell out why the possibility of eternity under their benevolent yoke, or fizzling out due to squandering our chances to expand, is preferable to getting paperclipped.

Because for me it is not so clear-cut. And be aware that we can fizzle out. I've argued about this here. We evidently have more than one chance to build an «aligned» (or as I'd rather have it, no-alignment-needed) AGI. We don't have infinite time for globohomo committees to surmount their perverse incentives, discover the true name of God through the game of musical chairs at Davos and immanentize Dath Ilan before proceeding to build said AGI – nor, I'd say, very good odds at aligning those committees to play the game in our interest.