
Culture War Roundup for the week of May 8, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I just got done listening to Eliezer Yudkowsky on EconTalk (https://www.econtalk.org/eliezer-yudkowsky-on-the-dangers-of-ai/).

I say this as someone who's mostly convinced of Big Yud's doomerism: Good lord, what a train wreck of a conversation. I'll save you the bother of listening to it -- Russ Roberts starts by asking a fairly softball question of (paraphrasing) "Why do you think the AIs will kill all of humanity?" And Yudkowsky responds by asking Roberts "Explain why you think they won't, and I'll poke your argument until it falls apart." Russ didn't really give strong arguments, and the rest of the interview repeated this pattern a couple of times. THIS IS NOT THE WAY HUMANS HAVE CONVERSATIONS! Your goal was not to logically demolish Russ Roberts' faulty thinking, but to use Roberts as a sounding board to get your ideas to his huge audience, and you completely failed. Roberts wasn't convinced by the end, and I'm sure EY came off as a crank to anyone who was new to him.

I hope EY lurks here, or maybe someone close to him does. Here's my advice: if you want to convince people who are not already steeped in your philosophy, you need a short explanation of your thesis that you can rattle off in about 5 minutes without any jargon the median congresscritter doesn't already know. You should workshop it on people who don't know who you are, don't know any math or computer programming, and haven't read the Sequences. Then, when the next podcast host asks you why AIs will kill us all, you should be able to give a tight, logical-ish argument that gets the conversation going in a way an audience can find interesting. 5 minutes can't cover everything, so different people will poke and prod your argument in various ways, and that's when you fill in the gaps and poke holes in their thinking -- something you did to great effect with Dwarkesh Patel (https://youtube.com/watch?v=41SUp-TRVlg&pp=ygUJeXVka293c2tp). That was a much better interview, mostly because Patel came in with much more knowledge and asked much better questions. I know you're probably tired of going over the same points ad nauseam, but every host will have audience members who've never heard of you or your jargon, and you have about 5 minutes to hold their interest before they press "next".

I think his problem isn't so much that he's bad at communicating his ideas; it's that his ideas aren't that great in the first place. He's not a genius AI researcher, he's just a guy who wrote some meandering self-insert Harry Potter fan fiction and then some sci-fi doomsday scenarios about tiny robots turning us into goop. He can't make an argument without imagining a bunch of technologies that don't exist yet, may never exist, and might not even be possible. And even if all of those things were true, his solution is to nuke China if they build GPU factories -- which, even if it were a good plan (it isn't), he would never in a million years be able to convince anyone to do. I really can't understand the obsession with this guy.

Yudkowsky's arguments are robust to disruption in the details.

An ASI does not need dry nanotech to pose an existential risk to humanity, simple nukes and bioweapons more than suffice.

Not to mention that, as I replied to Dase above, just because he was wrong about the first AGI (LLMs) being utterly alien in terms of cognition, doesn't mean that they don't pose an existential risk themselves, be it from rogue simulacra or simply being in the hands of bad actors.

It would be insane to expect him to be 100% on the ball, and in the places where he was wrong in hindsight, the vast majority of others were too, and yet here we are with AGI incipient, and no clear idea of how to control it (though there are promising techniques).

That earns a fuck ton of respect in my books.

Respect is fine, but per the orthogonality thesis, respect for his predictive abilities shouldn't translate into agreement with his goals (and yet it does, because by something like a flipped version of Aaronson's "AI is the nerd being shoved into the locker" perspective, we are predisposed to think that the nerd is on our team).

That is not what the orthogonality thesis is about!

All it states is that almost any level of intelligence can be paired with almost any goal or utility function, such that there's nothing stopping a superintelligence from wanting to make only paperclips.

Don't see it applying to how much respect I should have for Yud for one.

I think you may have misunderstood me; I explicitly said ("Respect is fine") that it doesn't apply to how much respect you should have, as long as respect does not entail a greater likelihood of following his suggestions. "Respect" is one of those words that are overloaded for reasons that I suspect involve enemy action: it is rational to "respect" authority in the sense of being aware that it can field many dudes with guns and acting in a way that will make it less likely you will end up facing the barrel of one, but authority would have an easier time if you "respected" it in the sense of doing what it wants even when there wasn't enough budget to send a dude with a gun to your house, and ideally just replaced your value function with authority's own.

I have little doubt that Eliezer is more intelligent and insightful than most of us here, but I don't believe that his value function is aligned with mine and don't have the impression that he considers truthfulness towards others to be a terminal value, so if anything his superior intelligence only makes it more likely that letting him persuade me of anything will lead me to act against my own interest.