
Culture War Roundup for the week of May 8, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I just got done listening to Eliezer Yudkowsky on EconTalk (https://www.econtalk.org/eliezer-yudkowsky-on-the-dangers-of-ai/).

I say this as someone who's mostly convinced of Big Yud's doomerism: Good lord, what a train wreck of a conversation. I'll save you the bother of listening to it -- Russ Roberts starts by asking a fairly softball question of (paraphrasing) "Why do you think the AIs will kill all of humanity?" And Yudkowsky responds by asking Roberts "Explain why you think they won't, and I'll poke your argument until it falls apart." Russ didn't really give strong arguments, and the rest of the interview repeated this pattern a couple of times. THIS IS NOT THE WAY HUMANS HAVE CONVERSATIONS! Your goal was not to logically demolish Russ Roberts' faulty thinking, but to use Roberts as a sounding board to get your ideas to his huge audience, and you completely failed. Roberts wasn't convinced by the end, and I'm sure EY came off as a crank to anyone who was new to him.

I hope EY lurks here, or maybe someone close to him does. Here's my advice: if you want to convince people who are not already steeped in your philosophy, you need a short explanation of your thesis that you can rattle off in about 5 minutes and that doesn't use any jargon the median congresscritter doesn't already know. Workshop it on people who don't know who you are, don't know any math or computer programming, and haven't read the Sequences. Then, when the next podcast host asks you why AIs will kill us all, you should be able to give a tight, logical-ish argument that gets the conversation going in a way an audience can find interesting. 5 minutes can't cover everything, so different people will poke and prod your argument in various ways, and that's when you fill in the gaps and poke holes in their thinking -- something you did to great effect with Dwarkesh Patel (https://youtube.com/watch?v=41SUp-TRVlg&pp=ygUJeXVka293c2tp). That was a much better interview, mostly because Patel came in with much more knowledge and asked much better questions. I know you're probably tired of going over the same points ad nauseam, but every host will have audience members who've never heard of you or your jargon, and you have about 5 minutes to hold their interest before they press "next".

I say this as someone who's mostly convinced of Big Yud's doomerism: Good lord, what a train wreck of a conversation.

Couldn't agree more. In addition to Yud's failure to communicate concisely and clearly, I feel like his specific arguments are poorly chosen. There are more convincing responses that can be given to common questions and objections.

Question: Why can't we just switch off the AI?

Yud's answer: It will come up with some sophisticated way to prevent this, like using zero-day exploits nobody knows about.

My answer: All we needed to do to stop Hitler was shoot him in the head. Easy as flipping a switch, basically. But tens of millions died in the process. All you really need to be dangerous and hard to kill is the ability to communicate and persuade, and a superhuman AI will be much better at this than Hitler.

Question: How will an AI kill all of humanity?

Yud's answer: Sophisticated nanobots.

My answer: Humans already pretty much have the technology to kill all humans, between nuclear and biological weapons. Even if we can perfectly align superhuman AIs, they will end up working for governments and militaries and enhancing those killing capacities even further. Killing all humans is pretty close to being a solved problem, and all that's missing is a malignant AI (or a malignant human controlling an aligned AI) to pull the trigger. Edit: Also it's probably not necessary to kill all humans, just kill most of us and collapse society to the point that the survivors don't pose a meaningful threat to the AI's goals.

Yeah, I feel like EY sometimes mixes up his "the AGI will be WAY SMARTER THAN US" message with the "AI CAN KILL US IN EXOTIC AND ESOTERIC WAYS WE CAN'T COMPREHEND" message.

If you're arguing about why AI will kill us all, yes, you need to establish that it is indeed going to be superhuman and alien to us in a way that will be hard to predict.

But the other side of it is that you should also make a point to show that the threshold for killing us all is not all that high, if you account for what humans are presently capable of.

So yes, the AGI may pull some GALAXY-BRAINED strat to kill us using speculative tech we don't understand.

But if it doesn't have to, then no need to go adding complexity to the argument. Maybe it just fools a nuclear-armed state into believing it is being attacked to kick off a nuclear exchange, then sends killbots after the survivors while it builds itself up to omnipotence. Maybe it just releases like six different deadly plagues at once.

So rather than saying "the AGI could do [galaxy-brained strategy]," which might trigger the audience's skepticism, just argue "the AGI could do [presently possible strategy], but it could think of much deadlier things to do."

"How would it do this without humans noticing?"

"I've already argued that it is superhuman, so it is going to make it's actions hard to detect. If you don't believe that then we should revisit my arguments for why it will be superhuman."

Don't try to convince them of the ability to kill everyone and the AI being super-intelligent at the same time.

Take it step by step.

If you're arguing about why AI will kill us all, yes, you need to establish that it is indeed going to be superhuman and alien to us in a way that will be hard to predict.

I don't even think you need to do this. Even if the AI is merely as smart and charismatic as an exceptionally smart and charismatic human, and even if the AI is perfectly aligned, it's still a significant danger.

Imagine the following scenario:

  1. The AI is in the top 0.1% of human IQ.

  2. The AI is in the top 0.1% of human persuasion/charisma.

  3. The AI is perfectly aligned. It will do whatever its human "master" commands and will never do anything its human "master" wouldn't approve of.

  4. A tin-pot dictator such as Kim Jong Un can afford enough computing hardware to run around 1000 instances of this AI.

An army of 1000 genius-slaves who can work 24/7 is already an extremely dangerous thing. It's enough brain power for a nuclear weapons program. It's enough for a bioweapons program. It's enough to run a campaign of trickery, blackmail, and hacking to obtain state secrets and kompromat from foreign officials. It's probably enough to launch a cyberwarfare campaign that would take down global financial systems. Maybe not quite sufficient to end the human race, but sufficient to hold the world hostage and threaten catastrophic consequences.

Bioweapons, kompromat, and cyberwarfare are probably doable. Nukes require a lot of expensive physical infrastructure to build, and that infrastructure can be detected and compromised.

Perhaps the AI will become so charismatic that it could meme "LEGALIZE NUCLEAR BOMBS" into reality.

Feels almost like ingroup signaling. It's not enough to convince people that AI will simply destroy civilization and reduce humanity to roaming hunter-gatherer bands. He has to convince people that AI will kill every single human being on Earth in order to maintain his street cred.

Given a consequentialist theory like utilitarianism, there is also a huge asymmetry of importance between "AI kills almost all humans, the survivors persist for millions of years in caves" and "AI kills the last human."

Yep.

Although the thing that always makes me take AI risk a bit more seriously is the version where it doesn't kill all the humans, but instead creates a subtly but persistently unhappy world for them to inhabit, one that gets locked in for eternity.

Oh yes, in the vast majority of cases an unaligned AI kills us, but at least in those cases it will be quick. The "I have no mouth and I must scream" scenarios are more existentially frightening to me.

Why would you even need a malignant AI or a malignant human?

It's not hard to imagine realistic scenarios where AI-enhanced military technology simply ends up sliding into a local maximum that ends with major destruction (or what's effectively destruction from a bird's-eye view). No need to come up with hyperbolic, anthropomorphised scenarios that read mostly like fiction.

I meant "malignant" in the same sense as "malignant tumor." Wasn't trying to imply any deeper value judgment.

Honestly, you could explain grey goo with history. That's kind of how the Stuxnet worm actually worked: the malware told the machine components to run as fast as possible and disabled their ability to shut down if they got damaged. So, they did.

Nanobots could work much the same way -- they're built to take apart matter and build something else with it. But if you don't give them stopping points, there's no reason they wouldn't turn everything into whatever you wanted them to make, including you, who happen to be made of the right kinds of atoms.

The problem with the nanobot argument isn't that it's impossible. I'm convinced a sufficiently smart AI could build and deploy nanobots in the manner Yud proposes. The problem with the argument is that there's no need to invoke nanobots to explain why superintelligent AI is dangerous. Some number of people will hear "nanobots" and think "sci-fi nonsense." Rather than try to change their minds, it's much easier to just talk about the many mundane and already-extant threats (like nukes, gain-of-function bioweapons, etc.) that a smart AI could make use of.

I'm convinced a sufficiently smart AI could build and deploy nanobots in the manner Yud proposes.

I'm not convinced that's possible. Specifically I suspect that if you build a nanobot that can self-replicate with high fidelity and store chemical energy internally, you will pretty quickly end up with biological life that can use the grey goo as food.

Biological life is already self-replicating nanotech, optimized by a billion years of gradient descent. An AI can almost certainly design something better for any particular niche, but probably not something that is simultaneously better in every niche.

Though note that "nanobots are not a viable route to exterminating humans" doesn't mean "exterminating humans is impossible". The good old "drop a sufficiently large rock on the earth" method would work.

I don't think the "nanobots are the same as biological life, therefore they're not extremely dangerous" argument holds. Take viruses alone: they can already kill a good chunk of the population (sure, there are limits on how they evolve, but... now you can design them with your superintelligence). Why not a virus that spreads to the entire population while lying dormant for years and only then starts killing, or extremely light viruses that can spread airborne across the entire planet? There are plenty of creative ways to spread to everyone, not even including the zombie virus. Nanobots presumably would be even more flexible.

Nanobots presumably would be even more flexible.

Why would we presume this? Self-replicating nanobots are operating under the constraint that they have to faithfully replicate themselves, so they need to contain all of the information required for their operation across all possible environments. Or at least they need to operate under that constraint if you want them to be useful nanobots. Biological life is under no such constraint. This is incidentally why industrial bioprocesses are so finicky: it's easy to insert a gene into an E. coli that makes it produce your substance of interest, but hard to ensure that none of the E. coli mutate to no longer produce your substance of interest, and promptly outcompete the ones doing useful work.

why not a virus that spreads to the entire population while lying dormant for years and only then starts killing, or extremely light viruses that can spread airborne across the entire planet? There are plenty of creative ways to spread to everyone, not even including the zombie virus

I don't think I count "machine that can replicate itself by using the machinery of the host" as "nanotech". I think that's just a normal virus. And yes, a sufficiently bad one of those could make human civilization no longer an active threat. "Spreads to the entire population while lying dormant for years [without failing to infect some people due to immune-system quirks, and without triggering early in others]" is a much bigger ask than you think it is. But you don't actually need that: observe that COVID was orders of magnitude off from the worst it could have been, and despite that it was still a complete clusterfuck.

Although I think, in terms of effectiveness relative to difficulty, "sufficiently large rock or equivalent" still wins over grey goo. There are also other obvious approaches, like "take over the Twitter accounts of top leaders, trigger global war", though it's probably really hard to just take over prominent Twitter accounts.

My answer: Humans already pretty much have the technology to kill all humans, between nuclear and biological weapons. Even if we can perfectly align superhuman AIs, they will end up working for governments and militaries and enhancing those killing capacities even further. Killing all humans is pretty close to being a solved problem, and all that's missing is a malignant AI (or a malignant human controlling an aligned AI) to pull the trigger.

Yeah, I'm not sure why the Skynet-like totally autonomous murder AI eats up so much of the discussion.

IIRC the original "Butlerian Jihad" concept was fear of how humans would use AI against other humans (the Star War against Omnius and an independent machine polity seems to be a Brian Herbert thing).

The idea of a Chinese-controlled AI incrementally improving murder capacities while working with the government seems like a much better tactical position from which to plant the seeds of AI fear than using another speculative technology and what's widely considered a sci-fi trope to make the case.

China is already pretty far down the road of "can kill humanity", and people are already primed to be concerned about their tech. A much more grounded issue than nanomachines.

Yeah, didn't China already use technology to create a bioweapon that just recently devastated the globe? What's to stop them from using AI to design another super virus and then WHOOPSIE, super COVID is unleashed, my bad

Huh, you could frame it as "here's a list of ways that existing state-level powers could already wreak havoc, now imagine they create an AI which just picks up where they left off and pushes things along further."

So the AI isn't a 'unique' threat to humanity, but rather the logical extension of existing threats.

Yeah, lots of veins to mine there.

You can talk about surveillance capitalism for the left-wingers, and point out to the Right the potential for tyranny when the government has autonomous tech and doesn't even need to convince salaried hatchet-men to do its killing...

Certain people - whether it's a result of bad math or the Cold War ending the way it did - really seem to react badly to "humanity is under threat". Maybe bringing it down to a more relatable level will make it sink in for them.