
Culture War Roundup for the week of May 8, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I just got done listening to Eliezer Yudkowsky on EconTalk (https://www.econtalk.org/eliezer-yudkowsky-on-the-dangers-of-ai/).

I say this as someone who's mostly convinced of Big Yud's doomerism: Good lord, what a train wreck of a conversation. I'll save you the bother of listening to it -- Russ Roberts starts by asking a fairly softball question, (paraphrasing) "Why do you think the AIs will kill all of humanity?", and Yudkowsky responds by asking Roberts, "Explain why you think they won't, and I'll poke your argument until it falls apart." Russ didn't really give strong arguments, and the rest of the interview repeated this pattern a couple of times. THIS IS NOT THE WAY HUMANS HAVE CONVERSATIONS! Your goal was not to logically demolish Russ Roberts' faulty thinking, but to use Roberts as a sounding board to get your ideas in front of his huge audience, and you completely failed. Roberts wasn't convinced by the end, and I'm sure EY came off as a crank to anyone who was new to him.

I hope EY lurks here, or maybe someone close to him does. Here's my advice: if you want to convince people who are not already steeped in your philosophy, you need a short explanation of your thesis that you can rattle off in about 5 minutes and that doesn't use any jargon the median congresscritter doesn't already know. Workshop it on people who don't know who you are, don't know any math or computer programming, and haven't read the Sequences. Then, when the next podcast host asks why AIs will kill us all, you can give a tight, logical-ish argument that gets the conversation going in a way an audience finds interesting. 5 minutes can't cover everything, so different people will poke and prod the argument in different ways, and that's when you fill in the gaps and poke holes in their thinking, as you did to great effect with Dwarkesh Patel (https://youtube.com/watch?v=41SUp-TRVlg&pp=ygUJeXVka293c2tp). That was a much better interview, mostly because Patel came in with much more knowledge and asked much better questions. I know you're probably tired of going over the same points ad nauseam, but every host will have audience members who've never heard of you or your jargon, and you have about 5 minutes to hold their interest before they press "next".

I say this as someone who's mostly convinced of Big Yud's doomerism: Good lord, what a train wreck of a conversation.

Couldn't agree more. In addition to Yud's failure to communicate concisely and clearly, I feel like his specific arguments are poorly chosen. There are more convincing responses that can be given to common questions and objections.

Question: Why can't we just switch off the AI?

Yud's answer: It will come up with some sophisticated way to prevent this, like using zero-day exploits nobody knows about.

My answer: All we needed to do to stop Hitler was shoot him in the head. Easy as flipping a switch, basically. But tens of millions died in the process. All you really need to be dangerous and hard to kill is the ability to communicate and persuade, and a superhuman AI will be much better at this than Hitler.

Question: How will an AI kill all of humanity?

Yud's answer: Sophisticated nanobots.

My answer: Humans already pretty much have the technology to kill all humans, between nuclear and biological weapons. Even if we can perfectly align superhuman AIs, they will end up working for governments and militaries and enhancing those killing capacities even further. Killing all humans is pretty close to being a solved problem; all that's missing is a malignant AI (or a malignant human controlling an aligned AI) to pull the trigger. Edit: Also, it's probably not necessary to kill all humans, just to kill most of us and collapse society to the point that the survivors don't pose a meaningful threat to the AI's goals.

Yeah, I feel like EY sometimes mixes up his "the AGI will be WAY SMARTER THAN US" message with the "AI CAN KILL US IN EXOTIC AND ESOTERIC WAYS WE CAN'T COMPREHEND" message.

If you're arguing about why AI will kill us all, yes, you need to establish that it is indeed going to be superhuman and alien to us in a way that will be hard to predict.

But the other side of it is that you should also make a point to show that the threshold for killing us all is not all that high, if you account for what humans are presently capable of.

So yes, the AGI may pull some GALAXY-BRAINED strat to kill us using speculative tech we don't understand.

But if it doesn't have to, then no need to go adding complexity to the argument. Maybe it just fools a nuclear-armed state into believing it is being attacked to kick off a nuclear exchange, then sends killbots after the survivors while it builds itself up to omnipotence. Maybe it just releases like six different deadly plagues at once.

So rather than saying "the AGI could do [galaxy-brained strategy]", which might trigger the audience's skepticism, just argue "the AGI could do [presently possible strategy], but it could think of much deadlier things to do."

"How would it do this without humans noticing?"

"I've already argued that it is superhuman, so it is going to make it's actions hard to detect. If you don't believe that then we should revisit my arguments for why it will be superhuman."

Don't try to convince them of the ability to kill everyone and the AI being super-intelligent at the same time.

Take it step by step.

If you're arguing about why AI will kill us all, yes, you need to establish that it is indeed going to be superhuman and alien to us in a way that will be hard to predict.

I don't even think you need to do this. Even if the AI is merely as smart and charismatic as an exceptionally smart and charismatic human, and even if the AI is perfectly aligned, it's still a significant danger.

Imagine the following scenario:

  1. The AI is in the top 0.1% of human IQ.

  2. The AI is in the top 0.1% of human persuasion/charisma.

  3. The AI is perfectly aligned. It will do whatever its human "master" commands and will never do anything its human "master" wouldn't approve of.

  4. A tin-pot dictator such as Kim Jong Un can afford enough computing hardware to run around 1000 instances of this AI.

An army of 1000 genius-slaves who can work 24/7 is already an extremely dangerous thing. It's enough brain power for a nuclear weapons program. It's enough for a bioweapons program. It's enough to run a campaign of trickery, blackmail, and hacking to obtain state secrets and kompromat from foreign officials. It's probably enough to launch a cyberwarfare campaign that would take down global financial systems. Maybe not quite sufficient to end the human race, but sufficient to hold the world hostage and threaten catastrophic consequences.

Bioweapons, kompromat, and cyberwarfare are probably doable. Nukes require a lot of expensive physical infrastructure to build; that can be detected and compromised.

Perhaps the AI will become so charismatic that it could meme "LEGALIZE NUCLEAR BOMBS" into reality.