
Culture War Roundup for the week of May 8, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I just got done listening to Eliezer Yudkowsky on EconTalk (https://www.econtalk.org/eliezer-yudkowsky-on-the-dangers-of-ai/).

I say this as someone who's mostly convinced of Big Yud's doomerism: Good lord, what a train wreck of a conversation. I'll save you the bother of listening to it -- Russ Roberts starts by asking a fairly softball question of (paraphrasing) "Why do you think the AIs will kill all of humanity?" And Yudkowsky responds by asking Roberts "Explain why you think they won't, and I'll poke your argument until it falls apart." Russ didn't really give strong arguments, and the rest of the interview repeated this pattern a couple of times. THIS IS NOT THE WAY HUMANS HAVE CONVERSATIONS! Your goal was not to logically demolish Russ Roberts' faulty thinking, but to use Roberts as a sounding board to get your ideas in front of his huge audience, and you completely failed. Roberts wasn't convinced by the end, and I'm sure EY came off as a crank to anyone who was new to him.

I hope EY lurks here, or maybe someone close to him does. Here's my advice: if you want to convince people who are not already steeped in your philosophy, you need a short explanation of your thesis that you can rattle off in about 5 minutes, using no jargon the median congresscritter doesn't already know. Workshop it on people who don't know who you are, don't know any math or computer programming, and haven't read the Sequences. Then, when the next podcast host asks you why AIs will kill us all, you should be able to give a tight, logical-ish argument that gets the conversation going in a way an audience can find interesting. Five minutes can't cover everything, so different people will poke and prod your argument in various ways, and that's when you fill in the gaps and poke holes in their thinking -- something you did to great effect with Dwarkesh Patel (https://youtube.com/watch?v=41SUp-TRVlg&pp=ygUJeXVka293c2tp). That was a much better interview, mostly because Patel came in with much more knowledge and asked much better questions. I know you're probably tired of going over the same points ad nauseam, but every host will have audience members who've never heard of you or your jargon, and you have about 5 minutes to hold their interest or they'll press "next".

Especially given the Pascal's-wager-type argument going on here. You don't need to prove that AI will definitely kill all of humanity, and you don't even need to prove that it's more likely than not. A 10% chance that 9 billion people die is comparable in magnitude to 900 million people dying with certainty (to first order; the extinction of humanity as a species is additionally bad on top of that -- see the worked arithmetic after this list). You need to:

1: Create a plausible picture for how/why AI going wrong might literally destroy all humans, and not just be racist or something.

2: Demonstrate that the probability of this happening is on the order of 1% or more, rather than 0.000001%, so that it's worth taking seriously.

3: Explain explicitly how these connect, so people realize that the likelihood threshold for caring about this problem ought to be lower than for most other problems.

Don't go trying to argue that AI will definitely kill all of humanity, even if you believe it, because that's a much harder position to argue, and an unnecessarily strong one.
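
To put rough numbers on that first-order comparison, here's a minimal sketch. The 9-billion population figure and the probabilities are just the illustrative round numbers from above, not anyone's actual estimates:

```python
# Expected-value arithmetic behind the Pascal's-wager-type framing:
# even a modest probability of everyone dying carries an expected
# death toll comparable to a large certain catastrophe, so the
# threshold for taking the risk seriously should be low.

POPULATION = 9_000_000_000  # round figure used above, not a census number

for p in (0.10, 0.01, 0.000001):
    expected_deaths = p * POPULATION
    print(f"P(doom) = {p:<8g} -> expected deaths = {expected_deaths:>13,.0f}")

# Output:
# P(doom) = 0.1      -> expected deaths =   900,000,000
# P(doom) = 0.01     -> expected deaths =    90,000,000
# P(doom) = 1e-06    -> expected deaths =         9,000
```

Which is the point of item 2: the argument doesn't need P(doom) anywhere near certainty, just credibly above the one-in-a-million noise floor.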

and not just be racist or something

Having read this, I think it's actually low-hanging fruit for the AI doomers. There are plenty of people very willing to accept that everything is already racist. It should be no problem to postulate that eHitler will use AI to kill all Jews/blacks/gypsies/whoever. From there, it's a pretty short trip to eHitler losing control of his kill bots to hackers, and we get WWIII, where China, Russia, Venezuela, and every one of the 200+ ethnicities in Nigeria all have their own kill bots aimed at some other fraction of humanity. The AI doesn't even have to be super-intelligent, it just has to be good at its job. Chuck Schumer could do this in one sentence: "What makes you think Trump wouldn't use AI to round up all the black, brown, and queer bodies?" Instant 100% Blue Tribe support for AI alignment (or, more likely, suppression).

The AI doesn't even have to be super-intelligent, it just has to be good at its job.

I think this is one of the creepiest possibilities - that no matter how hard well-aligned, independent, agentic AGI is to build, we have to build it soon, because we need something that can think intelligently enough about the A-Z of possible new technologies to say "you'll need defenses against X soon, so here's how we're building Y", independently enough to say "no, I'm not going to tell you how Y works yet; that would just let a misanthrope figure out how to build X first", while being trustworthy enough that the result of building Y won't be "haha, that's what kills you all and gets you out of my way" ... and if we don't get all of that, then as soon as it's easy enough for a misanthrope to apply narrow "this is how you win at Go" level technologies to "how do we win at designing a superplague" or whatever, we're done for.

This is a great point. In some sense, this is the situation we had with the CDC. It was a trusted institution that was able to play around with gain-of-function research because its reputation indicated that it would only ever use the technology to fight disease, not to win a superplague war. It was limited to disease-type stuff, though, whereas the AI would presumably be able to predict and head off any kind of threat. Assuming, like you said, that we can trust it.

I think it makes "pausing" AI research impossible. There's no way to stop everyone from continuing the research. If the united West decides to pause, China will not, and it's not clear that the CCP is thinking about AI safety at all. The only real option is figuring out how to make a safe AI before someone else makes an unsafe AI.