
Culture War Roundup for the week of May 8, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I just got done listening to Eliezer Yudkowsky on EconTalk (https://www.econtalk.org/eliezer-yudkowsky-on-the-dangers-of-ai/).

I say this as someone who's mostly convinced of Big Yud's doomerism: Good lord, what a train wreck of a conversation. I'll save you the bother of listening to it -- Russ Roberts starts by asking a fairly softball question of (paraphrasing) "Why do you think the AIs will kill all of humanity?" And Yudkowsky responds by asking Roberts to "Explain why you think they won't, and I'll poke your argument until it falls apart." Russ didn't really give strong arguments, and the rest of the interview repeated this pattern a couple of times. THIS IS NOT THE WAY HUMANS HAVE CONVERSATIONS! Your goal was not to logically demolish Russ Roberts' faulty thinking, but to use Roberts as a sounding board to get your ideas to his huge audience, and you completely failed. Roberts wasn't convinced by the end, and I'm sure EY came off as a crank to anyone who was new to him.

I hope EY lurks here, or maybe someone close to him does. Here's my advice: if you want to convince people who are not already steeped in your philosophy, you need a short explanation of your thesis that you can rattle off in about 5 minutes, without using any jargon the median congresscritter doesn't already know. You should workshop it on people who don't know who you are, don't know any math or computer programming, and haven't read the Sequences. Then, when the next podcast host asks you why AIs will kill us all, you should be able to give a tight, logical-ish argument that gets the conversation going in a way an audience can find interesting. 5 minutes can't cover everything, so different people will poke and prod your argument in various ways, and that's when you fill in the gaps and poke holes in their thinking -- something you did to great effect with Dwarkesh Patel (https://youtube.com/watch?v=41SUp-TRVlg&pp=ygUJeXVka293c2tp). That was a much better interview, mostly because Patel came in with much more knowledge and asked much better questions. I know you're probably tired of going over the same points ad nauseam, but every host will have audience members who've never heard of you or your jargon, and you have about 5 minutes to hold their interest before they press "next".

I say this as someone who's mostly convinced of Big Yud's doomerism: Good lord, what a train wreck of a conversation.

Couldn't agree more. In addition to Yud's failure to communicate concisely and clearly, I feel like his specific arguments are poorly chosen. There are more convincing responses that can be given to common questions and objections.

Question: Why can't we just switch off the AI?

Yud's answer: It will come up with some sophisticated way to prevent this, like using zero-day exploits nobody knows about.

My answer: All we needed to do to stop Hitler was shoot him in the head. Easy as flipping a switch, basically. But tens of millions died in the process. All you really need to be dangerous and hard to kill is the ability to communicate and persuade, and a superhuman AI will be much better at this than Hitler.

Question: How will an AI kill all of humanity?

Yud's answer: Sophisticated nanobots.

My answer: Humans already pretty much have the technology to kill all humans, between nuclear and biological weapons. Even if we can perfectly align superhuman AIs, they will end up working for governments and militaries and enhancing those killing capacities even further. Killing all humans is pretty close to being a solved problem, and all that's missing is a malignant AI (or a malignant human controlling an aligned AI) to pull the trigger. Edit: Also it's probably not necessary to kill all humans, just kill most of us and collapse society to the point that the survivors don't pose a meaningful threat to the AI's goals.

Honestly, you could explain grey goo with history. That's roughly how Stuxnet actually worked: the worm told the centrifuges to spin at speeds that would damage them, while feeding normal readings back to the monitoring systems so nobody would shut them down. So, they tore themselves apart.

Nanobots could work much the same way: they're built to take apart matter and build something else with it. But if you don't give them stopping points, there's no reason they wouldn't turn everything into whatever you wanted them to make, including you, since you happen to be made of the right kinds of atoms.

The problem with the nanobot argument isn't that it's impossible. I'm convinced a sufficiently smart AI could build and deploy nanobots in the manner Yud proposes. The problem with the argument is that there's no need to invoke nanobots to explain why super intelligent AI is dangerous. Some number of people will hear "nanobots" and think "sci-fi nonsense." Rather than try to change their minds, it's much easier to just talk about the many mundane and already-extant threats (like nukes, gain of function bioweapons, etc.) that a smart AI could make use of.

I'm convinced a sufficiently smart AI could build and deploy nanobots in the manner Yud proposes.

I'm not convinced that's possible. Specifically I suspect that if you build a nanobot that can self-replicate with high fidelity and store chemical energy internally, you will pretty quickly end up with biological life that can use the grey goo as food.

Biological life is already self-replicating nanotech, optimized by a billion years of gradient descent. An AI can almost certainly design something better for any particular niche, but probably not something that is simultaneously better in every niche.

Though note that "nanobots are not a viable route to exterminating humans" doesn't mean "exterminating humans is impossible". The good old "drop a sufficiently large rock on the earth" method would work.

I don't think the "nanobots are the same as biological life, therefore not extremely dangerous" argument holds. Take viruses, which can already kill a good chunk of the population (sure, there are limitations in terms of how they evolve, but now you can design them with your superintelligence). Why not a virus that spreads to the entire population while lying dormant for years and then starts killing? Or extremely light viruses that can spread airborne across the entire planet? There are plenty of creative ways to reach everyone, and that's not even including the zombie virus. Nanobots presumably would be even more flexible.

Nanobots presumably would be even more flexible.

Why would we presume this? Self-replicating nanobots are operating under the constraint that they have to faithfully replicate themselves, so they need to contain all of the information required for their operation across all possible environments. Or at least they need to operate under that constraint if you want them to be useful nanobots. Biological life is under no such constraint. This is incidentally why industrial bioprocesses are so finicky: it's easy to insert a gene into an E. coli that makes it produce your substance of interest, but hard to ensure that none of the E. coli mutate to no longer produce your substance of interest, and promptly outcompete the ones doing useful work.
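That bioprocess finickiness is just exponential growth at two different rates. A toy sketch of the dynamic (every number here is made up for illustration: the mutation rate, the 5% fitness advantage, and the generation count are assumptions, not measured values):

```python
# Toy model: producer cells pay a metabolic cost for making the product,
# so rare mutants that stop producing grow slightly faster each generation
# and eventually dominate the culture.
producers = 1e6          # cells still making the product of interest
mutants = 0.0            # cells that lost the production gene
MUTATION_RATE = 1e-7     # fraction of producer offspring that lose the gene (assumed)
MUTANT_ADVANTAGE = 1.05  # mutants double 5% "faster" per generation (assumed)

for generation in range(500):
    producers *= 2.0                       # producers double each generation
    mutants *= 2.0 ** MUTANT_ADVANTAGE     # mutants slightly out-double them
    defectors = producers * MUTATION_RATE  # a tiny trickle of new mutants
    producers -= defectors
    mutants += defectors

fraction_mutant = mutants / (producers + mutants)
print(f"non-producer fraction after 500 generations: {fraction_mutant:.1%}")
```

With these made-up numbers the non-producers end up as the overwhelming majority, despite starting as a one-in-ten-million trickle. The same logic applies to a nanobot that "wastes" capacity on faithful replication across all possible environments: any sloppier local replicator outgrows it.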

Why not a virus that spreads to the entire population while lying dormant for years and then starts killing? Or extremely light viruses that can spread airborne across the entire planet? There are plenty of creative ways to reach everyone, and that's not even including the zombie virus

I don't think I count "machine that can replicate itself by using the machinery of the host" as "nanotech". I think that's just a normal virus. And yes, a sufficiently bad one of those could make human civilization no longer an active threat. "Spreads to the entire population while lying dormant for years [while not failing to infect some people due to immune system quirks or triggering early in others]" is a much bigger ask than you think it is. But you don't actually need that: observe that COVID was orders of magnitude off from the worst it could have been, and despite that it was still a complete clusterfuck.

Although I think, in terms of effectiveness relative to difficulty, "sufficiently large rock or equivalent" still wins over grey goo. There are also other obvious approaches, like "take over the Twitter accounts of top leaders, trigger global war" -- though it's probably really hard to just take over prominent Twitter accounts.