
Culture War Roundup for the week of April 10, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Good point about mesa-optimizers and the difference between evolution and gradient descent.

"The onus is on those theorists to mechanistically define them and rigorously show that they exist."

Here's where I disagree. As someone once said, "he who rules is he who sets the null hypothesis". I claim that the onus is on AI researchers to show that their technology is safe. I don't have much faith in glib pronouncements that AI is totally understood and safe.

Nuclear power, on the other hand, is well understood, has bounded downside, and is a mature technology. It's not going to destroy the human race. We can disprove the FUD against it. But in 1945, I might have felt differently.

Proving a negative is not impossible, but it is very hard in practice. You know that anti-nuclear activists also demand extremely strong, cost-prohibitive proofs of safety, which is why we're in this mess. Of course, they have other nefarious motives to suppress human flourishing, but so do AI alarmists.

More to the point: decades ago, Nick Bostrom proposed a taxonomy of X-risks. Those risks should be rigorously compared, for we must hedge against all of them somehow. Some of those risks seem highly likely to me, follow from our prior social failures and even from particularities of the current trend, and are comparable to «total human death» in moral (if not «utilitarian») badness, so the argument that «risk from AI cannot be quantified» doesn't hold. Bostrom:

While some of the events described in the previous section would be certain to actually wipe out Homo sapiens (e.g. a breakdown of a meta-stable vacuum state) others could potentially be survived (such as an all-out nuclear war). If modern civilization were to collapse, however, it is not completely certain that it would arise again even if the human species survived. We may have used up too many of the easily available resources a primitive society would need to use to work itself up to our level of technology. A primitive human society may or may not be more likely to face extinction than any other animal species. But let’s not try that experiment.

If the primitive society lives on but fails to ever get back to current technological levels, let alone go beyond it, then we have an example of a crunch. Here are some potential causes of a crunch:

5.1 Resource depletion or ecological destruction

The natural resources needed to sustain a high-tech civilization are being used up. If some other cataclysm destroys the technology we have, it may not be possible to climb back up to present levels if natural conditions are less favorable than they were for our ancestors, for example if the most easily exploitable coal, oil, and mineral resources have been depleted. (On the other hand, if plenty of information about our technological feats is preserved, that could make a rebirth of civilization easier.)

5.2 Misguided world government or another static social equilibrium stops technological progress

One could imagine a fundamentalist religious or ecological movement one day coming to dominate the world. If by that time there are means of making such a world government stable against insurrections (by advanced surveillance or mind-control technologies), this might permanently put a lid on humanity’s potential to develop to a posthuman level. Aldous Huxley’s Brave New World is a well-known scenario of this type [50].

A world government may not be the only form of stable social equilibrium that could permanently thwart progress. Many regions of the world today have great difficulty building institutions that can support high growth. And historically, there are many places where progress stood still or retreated for significant periods of time. Economic and technological progress may not be as inevitable as it appears to us.

6.3 Repressive totalitarian global regime

Similarly, one can imagine that an intolerant world government, based perhaps on mistaken religious or ethical convictions, is formed, is stable, and decides to realize only a very small part of all the good things a posthuman world could contain.

Such a world government could conceivably be formed by a small group of people if they were in control of the first superintelligence and could select its goals. If the superintelligence arises suddenly and becomes powerful enough to take over the world, the posthuman world may reflect only the idiosyncratic values of the owners or designers of this superintelligence. Depending on what those values are, this scenario would count as a shriek.

It is counterproductive to focus only on the well-propagandized model of AI takeover through FOOM, in an age when AI built on principles radically different from those preferred by the FOOM argument's inventors is undergoing its Cambrian explosion; and, in doing so, to exacerbate those Crunch-type risks. It is unprincipled. Moreover, it's wishful thinking: if only we could guard our asses from this one threat model! Perhaps one type of risk is truly greater than another, in raw probability or expected negative value or both. But merely rehashing thought experiments about Seed AI from the 90s won't suffice to prove that the orthodox AI risk is the greater evil.

Now Bostrom himself proposes building a 6.3 regime, and Eliezer helpfully paves the way to it through his alarmism about training of capable models. I say we should at least demand they spell out why the possibility of eternity under their benevolent yoke, or fizzling out due to squandering our chances to expand, is preferable to getting paperclipped.

Because for me it is not so clear-cut. And be aware that we can fizzle out. I've argued about this here. We evidently have more than one chance to build an «aligned» (or, as I'd rather have it, no-alignment-needed) AGI. We don't have infinite time for globohomo committees to surmount their perverse incentives, discover the true name of God through the game of musical chairs at Davos, and immanentize Dath Ilan before proceeding to build said AGI – nor, I'd say, very good odds of aligning those committees to play the game in our interest.