Culture War Roundup for the week of May 1, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


More developments on the AI front:

Big Yud steps up his game, not to be outshined by the Basilisk Man.

Now, he officially calls for a preemptive nuclear strike on suspicious unauthorized GPU clusters.

If we see the AI threat as a nuclear-weapons-level threat, only worse, this is not unreasonable.

Remember when the USSR planned a nuclear strike on China to stop their great-power ambitions (only to have the greatest humanitarian that ever lived, Richard Milhous Nixon, veto the proposal).

Such Quaker squeamishness will have no place in the future.

So, the outlines of the Katechon World are taking shape. What will it look like?

It will look great.

You will live in your room, play the original World of Warcraft and Grand Theft Auto: San Andreas on your PC, read your favorite blogs and debate intelligent design on your favorite message boards.

Then you will log on to Free Republic and call for more vigorous enhanced interrogation of terrorists caught with unauthorized GPUs.

When you are bored in your room, you will have no choice but to go outside, meet people, admire the things around you, take pictures of things that really impress you with your Kodak camera, and, when you are really bored, play Snake on your Nokia phone.

Yes, the best age in history, the noughties, will retvrn. Forever, protected by the CoDominium of the US and China.

edit: links again

Ok, let's say that Russia builds a large GPU cluster. Then the US and China have two options:

  1. Put up with it, in which case there is an unknown chance of a superhuman AI emerging and destroying humanity

  2. Nuke Russia, in which case there is a very high chance of a total nuclear war that kills hundreds of millions of people and devastates much of the world

Does Yudkowsky actually think that 2 is preferable?

If Russia invaded Alaska and said "if you shoot back at our soldiers we will launch nuclear weapons", letting them conquer Alaska would be better than a nuclear exchange. Nonetheless the U.S. considers "don't invade U.S. territory" a red line that they are willing to go to war with a nuclear power to protect. The proposal would be to establish the hypothetical anti-AI treaty as another important red line, hoping that the possibility of nuclear escalation remains in the background as a deterrent without ever manifesting. The risk from AI development doesn't have to be worse than nuclear war, it just has to be worse than the risk of setting an additional red line that might escalate to nuclear war. The real case against it is that superhuman AI is also a potentially beneficial technology (everyone on Earth is already facing death from old age, after all, not to mention non-AI existential risks); if it were purely destructive, then aggressively pursuing an international agreement against developing it would make sense even for relatively low percentage risks.
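The comparison above is essentially an expected-value argument: what matters is not which single outcome is worst, but which policy has the lower expected harm. A toy sketch of that arithmetic, where every probability and casualty figure is my own illustrative assumption rather than anything claimed in the post:

```python
# Toy expected-harm comparison for the red-line argument.
# All probabilities and casualty figures below are illustrative
# assumptions for the sake of the arithmetic, not real estimates.

WORLD_POP = 8e9  # roughly everyone alive today

def expected_deaths(p_event: float, deaths_if_event: float) -> float:
    """Expected deaths = probability of the event times its death toll."""
    return p_event * deaths_if_event

# Policy 1: tolerate the GPU cluster; assume a 5% chance of
# superhuman AI emerging and killing everyone.
tolerate = expected_deaths(p_event=0.05, deaths_if_event=WORLD_POP)

# Policy 2: enforce the red line; assume a 10% chance deterrence
# fails and escalates to a nuclear war killing 500 million people.
enforce = expected_deaths(p_event=0.10, deaths_if_event=5e8)

# Under these assumed numbers the red line has the lower expected
# harm, even though nuclear war is the worse single outcome.
assert enforce < tolerate
```

The point of the sketch is only that the red line can win the comparison without AI doom being "worse than nuclear war" outright; it just has to be worse in expectation than the added escalation risk.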

When you say "the real case against it", are you merely noting an argument that exists, or are you making the argument i.e. saying in your own voice "banning AI is bad because AI could be good too"?

(In case of the latter: I know that The Precipice at least considers AI a bigger threat than literally everything else put together, at 1/10 AI doom and 1/6 total doom. I categorise things a bit differently than Ord does, but I'm in agreement on that point, and when looking at the three others that I consider plausibly within an OOM of AI (Life 2.0, irrecoverable dystopia, and unknown unknowns) it jumps out at me that I can't definitively state that having obedient superintelligences available would be on-net helpful with any of them. Life 2.0 would be exceptionally difficult to build without a superintelligence and could plausibly be much harder to defeat than to deploy. Most tangible proposals I've seen for irrecoverable dystopia depend on AI-based propaganda or policing. And unknown unknowns are unknowable.)

Both. Mostly I was contrasting to the obverse case against it, that risking nuclear escalation would be unthinkable even if it was a purely harmful doomsday device. If it was an atmosphere-ignition bomb being developed for deterrence purposes that people thought had a relevant chance of going off by accident during development (even if it was only a 1% risk), then aggressively demanding an international ban would be the obvious move even though it would carry some small risk of escalating to nuclear war. The common knowledge about the straightforward upside of such a ban would also make it much more politically viable, making it more worthwhile to pursue a ban rather than focusing on trying to prevent accidental ignition during development. Also, unlike ASI, developing the bomb would not help you prevent others from causing accidental or intentional atmospheric ignition.

That said, I do think that is the main reason that pursuing an AI ban would be bad even if it was politically possible. In terms of existential risk I have not read The Precipice and am certainly not any kind of expert, but I am dubious about the idea that delaying for decades or centuries attempting to preserve the unstable status quo would decrease rather than increase long-term existential risk. The main risk I was thinking about (besides "someone more reckless develops ASI first") was the collapse of current civilization reducing humanity's population and industrial/technological capabilities until it is more vulnerable to additional shocks. Those additional shocks, whether over a short period of time from the original disaster or over a long period against a population that has failed to regain current capabilities (perhaps because we have already used the low-hanging fruit of resources like fossil fuels) could then reduce it to the point that it is vulnerable to extinction. An obvious risk for the initial collapse would be nuclear war, but it could also be something more complicated, like dysfunctional institutions failing to find alternatives to depleted phosphorus reserves before massive fertilizer shortages. Humanity itself isn't stable, it is currently slowly losing intelligence and health to both outright dysgenic selection from our current society and to lower infant mortality reducing purifying selection, so the humans confronting future threats may well be less capable than we are. Once humans are reduced to subsistence agriculture again the obvious candidate to take them the rest of the way would be climate shocks, as have greatly reduced the human population in the past.

Furthermore, I'm not that sympathetic to Total Utilitarianism as opposed to something like Average Preference Utilitarianism, I value the preferences of those who do or will exist but not purely hypothetical people who will never exist. If given a choice between saving someone's life and increasing the number of people who will be born by 2, I strongly favor the former because his desire to remain alive is real and their desire to be born is an imaginary feature of hypothetical people. But without sufficient medical development every one of those real people will soon die. Now, wiping out humanity is still worse than letting everyone die of old age, both because it means they die sooner and because most of those people have a preference that humanity continue existing. But I weigh that as the preferences of 8 billion people that humanity should continue, 8 billion people who also don't want to die themselves, not the preferences of 10^46 hypothetical people per century after galactic colonization (per Bostrom's Astronomical Waste) who want to be born.

The main risk I was thinking about (besides "someone more reckless develops ASI first") was the collapse of current civilization reducing humanity's population and industrial/technological capabilities until it is more vulnerable to additional shocks. Those additional shocks, whether over a short period of time from the original disaster or over a long period against a population that has failed to regain current capabilities (perhaps because we have already used the low-hanging fruit of resources like fossil fuels) could then reduce it to the point that it is vulnerable to extinction.

There's one way I could maybe see us having problems recreating some facet of modern tech. That is, indeed, a nuclear war, and the resulting radiation causing the most advanced computers to crash often (since modern RAM/registers operate at such fine tolerances that a single decay can flip a bit). Even then, though, there are ways and means of getting around that; they're just expensive.

Ord indeed takes an axe to the general version of this argument. Main points: 1) in many cases, resources are actually more accessible (e.g. open-cut mines, which will still be there even if you ignore them for 50 years, or a ruined city made substantially out of metal being a much easier source of metal than mankind's had since native copper was exhausted back in the Stone Age), 2) redeveloping technology is much easier than developing it for the first time, since you don't need the 1.0, least efficient version of the tech to be useful (e.g. the Newcomen atmospheric engine is hilariously inferior to what we could make with even similar-precision equipment). There are a whole pile of doomsday preppers who keep this sort of information in hardcopy in bunkers; we're not going to lose it. And, well, 1700s humanity (knocking us back further than that even temporarily would be extremely hard, because pre-industrial equipment is buildable by artisans) is still near-immune to natural X-risks; I'm less convinced that 1700s humanity would survive another Chicxulub than I am of modern humanity doing so, but that is the sort of thing it would take, and shocks that large are pinned down with low uncertainty at a rate of about one per 100,000,000 years.

If you really want to create a scenario where being knocked back a bit is a problem, I think the most plausible is something along the lines of "we release some horrible X-risk thing, then we go Mad Max, and that stops us from counteracting the X-risk thing". Global warming is not going to do that - sea levels will keep rising, of course, and the areas in which crops can be grown will change a little bit more, but none of that is too fast for civilisations to survive. (It's not like you're talking about 1692 Port Royal sinking into the ocean in a few minutes; you're talking about decades.) Most of the anthropogenic risks are pretty fast, so they're ruled out; we're dead or we're not. Life 2.0 is about the only one where I'd say "yeah, that's plausible"; that can have a long lead time.

Humanity itself isn't stable, it is currently slowly losing intelligence and health to both outright dysgenic selection from our current society and to lower infant mortality reducing purifying selection, so the humans confronting future threats may well be less capable than we are.

Dysgenics is real but not very fast, and it's only plausibly been operating for what, a century, and in only about half the world? This isn't going to be the end of the world. Flynn effect would be wiped out in apocalypse scenarios, of course, but we haven't eroded the baseline that much.

And to zoom out and talk about X-risk in fully-general terms, I'll say this: there are ways to mitigate it that don't involve opening the Pandora's Box of neural-net AGI. Off-world colonies don't need AI, and self-sustaining ones take an absolute sledgehammer to every X-risk except AI and dystopia (and aliens and God, but they're hardly immediate concerns). Dumb incentives for bio research can be fixed (and physics research, if and when we get to that). Dysgenics yields to PGT-P and sperm donors (although eugenics has some issues of its own). Hell, even GOFAI research or uploads aren't likely to take much over a century, and would be a hell of a lot safer than playing with neural nets (safer is not the same thing as safe, but fine, I agree, keeping AI suppressed on extremely-long timescales has issues). "We must do something" does not imply "we must do this".

Off-world colonies don't need AI, and self-sustaining ones take an absolute sledgehammer to every X-risk except AI and dystopia (and aliens and God, but they're hardly immediate concerns). Dumb incentives for bio research can be fixed (and physics research, if and when we get to that). Dysgenics yields to PGT-P and sperm donors (although eugenics has some issues of its own).

Sure, but of course such measures being possible doesn't mean they'll actually be done.

Hell, even GOFAI research or uploads aren't likely to take much over a century, and would be a hell of a lot safer than playing with neural nets

This seems like too much certainty about the nature and difficulty of the task, which in turn influences whether significant delay actually increases the odds of success. For instance, if we turn out to live in a universe where superhuman AI safety isn't that hard, then the important thing is probably that it be done by a team that considers it a serious concern at all. Right now the leading AI company is run by people who are very concerned with AI alignment and who founded the company with that in mind, if we ban AI development and then the ban gets abandoned in 30 years there's a good chance that won't be the case again.

A candidate for such a universe would be if it's viable to make superintelligent Tool AIs. Like if GPT-10 can mechanistically output superhuman scientific papers but still doesn't have goals of its own. Such an AI would still be dangerous and you certainly couldn't release it to the general public, but you could carefully prompt it for papers suggesting more resilient AI alignment solutions. Some have argued Agent AIs would have advantages compared to Tool AIs, like Gwern arguing Tool AIs would be "less intelligent, efficient, and economically valuable". Let's say we live in a future where more advanced versions of GPT get routinely hooked up to other components like AgentGPT to carry out tasks, something which makes it significantly better at complicated tasks. OpenAI just developed GPT-10 which might be capable of superhuman scientific research. They can immediately hook it up to AgentGPT+ and make trillions of dollars while curing cancer, or they can spend 2 years tweaking it until it can perform superhuman scientific research without agentic components. It seems plausible that OpenAI would take the harder but safer route, but our 2050s AI company very well might not bother. Especially if the researchers, having successfully gotten rid of the ban, view AI alignment people the same way anti-nuclear-power environmentalists and anti-GMO activists are viewed by those respective fields.

Regarding talk of 100-year bans on AI while people steadily work on supposedly safer methods, I'm reminded of how 40 years ago overpopulation was a big mainstream concern among intellectuals. These ideas influenced government policy, most famously China's One Child policy. Today the fertility rate is substantially reduced (though mostly not by the anti-overpopulation activists), the population is predictably aging, and... the plan is completely abandoned, even though that was the entirely predictable result of dropping fertility. Nowadays if a country is concerned with fertility either way, it'll want it to increase rather than decrease. Likewise the eugenics movement had ambitions of operating across many generations before being erased by the tides of history. In general, expecting your movement/ideas to retain power that long seems very risky.