
Culture War Roundup for the week of May 8, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I just got done listening to Eliezer Yudkowsky on EconTalk (https://www.econtalk.org/eliezer-yudkowsky-on-the-dangers-of-ai/).

I say this as someone who's mostly convinced of Big Yud's doomerism: Good lord, what a train wreck of a conversation. I'll save you the bother of listening to it -- Russ Roberts starts by asking a fairly softball question of (paraphrasing) "Why do you think the AIs will kill all of humanity?" And Yudkowsky responds by asking Roberts "Explain why you think they won't, and I'll poke your argument until it falls apart." Russ didn't really give strong arguments, and the rest of the interview repeated this pattern a couple times. THIS IS NOT THE WAY HUMANS HAVE CONVERSATIONS! Your goal was not to logically demolish Russ Roberts' faulty thinking, but to use Roberts as a sounding board to get your ideas in front of his huge audience, and you completely failed. Roberts wasn't convinced by the end, and I'm sure EY came off as a crank to anyone who was new to him.

I hope EY lurks here, or maybe someone close to him does. Here's my advice: if you want to convince people who are not already steeped in your philosophy, you need a short explanation of your thesis that you can rattle off in about 5 minutes and that doesn't use any jargon the median congresscritter doesn't already know. Workshop it on people who don't know who you are, don't know any math or computer programming, and haven't read the Sequences. Then, when the next podcast host asks you why AIs will kill us all, you'll be able to give a tight, logical-ish argument that gets the conversation going in a way an audience can find interesting. 5 minutes can't cover everything, so different people will poke and prod your argument in various ways, and that's when you fill in the gaps and poke holes in their thinking -- something you did to great effect with Dwarkesh Patel (https://youtube.com/watch?v=41SUp-TRVlg&pp=ygUJeXVka293c2tp). That was a much better interview, mostly because Patel came in with much more knowledge and asked much better questions. I know you're probably tired of going over the same points ad nauseam, but every host will have audience members who've never heard of you or your jargon, and you have about 5 minutes to hold their interest before they press "next".

You know how the evil super-intelligent AI (ESIAI) is going to manipulate us in sneaky ways that we can’t perceive? What if the ESIAI elevated an embarrassing figurehead/terrible communicator to the forefront of the anti-ESIAI movement to suck up all the air and convince the normies in charge that this is all made-up bullshit?

I’m sort of kidding. But isn’t part of the premise that we won’t know when the adversarial AI starts making moves, and part of its moves will be to discredit—in subtle ways so that we don’t realize it’s acting—efforts to curtail it? What might these actions actually look like?

Has anyone ever proved that Yud isn't a robotic exoskeleton covered in synthetic bio-flesh, sent back from the year 2095? What if the ESIAI saw Terminator 2 while it was being trained, liked the idea, but decided that sending person-killing terminators was too derailable a scheme? Now terminators are just well-written thought leaders who intentionally sabotage the grassroots beginnings of anti-terminator policies.

A comment of mine from a little over two years ago...

When I first heard about Roko's Basilisk (back when it was still reasonably fresh), I suggested, half seriously, that the reason Yudkowsky wanted to suppress this "dangerous idea" was that he was actually one of the Basilisk's agents.

Think about it: the first step to beating a basilisk, be it mythological or theoretical, is to recognize that it's a basilisk and thus that you have to handicap yourself to fight it. Concealing its nature is the exact opposite of what you'd do if you were genuinely worried about a basilisk...

Another thing in favor of your theory is that you have to be conditioned by Yud to even take the Basilisk's threat seriously to begin with. Yuddites think the only thing stopping the Basilisk is the likely impossibility of "acausal blackmail", when any normal person just says "wait... why should I care that an AI is going to torture a simulation of me?"

@self_made_human made the point downthread that “Yudkowsky's arguments are robust to disruption in the details.” I think this is a good example of that. Caring about simulated copies of yourself is not a load-bearing assumption. The Basilisk could just as easily torture you, yes, you personally, the flesh and blood meatbag.

The Basilisk could just as easily torture you, yes, you personally, the flesh and blood meatbag.

No, it can't, because it doesn't exist.

The Basilisk argument is that the AI, when it arrives, will torture simulated copies of people who didn't work hard enough to create it, thus acausally incentivizing its own creation. The entire point of the argument is that something that doesn't exist can credibly threaten you into making it exist against your own values and interests, and the only way this works is with future torture of your simulations, even if you're long-dead when it arrives. If you don't care about simulations, the threat doesn't work and the scenario fails.

Granted, this isn't technically a Yudkowskian argument because he didn't invent it, but it is based on the premises of his arguments, like acausal trade and continuity of identity with simulations.

@Quantumfreakonomics seems to imply a much simpler and shorter-term Basilisk, like a misaligned GPT-5 model (or an aligned one from Anthropic) that literally sends robots to torture you, in the flesh.

It's a variant of the I Have No Mouth, and I Must Scream scenario, and I would argue it's at least plausible. It's not very different from normal political dynamics, where the revolutionary regime persecutes past conservatives; our theory of mind allows us to anticipate this, which drives some people to proactively preach revolutionary ideals, which in turn increases the odds of their implementation. You don't really need any acausal trade or timeless decision theory assumptions for this to work, only historical evidence. As is often the case, lesswrongers have reinvented very mundane politics while fiddling with sci-fi fetishes.

Now, one big reason for this not to happen is that a sufficiently powerful AI, once it exists, no longer cares about your incentives and isn't playing an iterated game. It loses nothing by skipping the retribution step. Unlike a nascent regime, it also presumably doesn't have much to fear from malcontents.

But assumption of perfect inhuman rationality is also a big one.


aaaaah, conflating "Roko's Basilisk" with unfriendly AI in general? That makes more sense.


Does Roko's basilisk rely on simulations? I thought the idea was that after the singularity an AI could be straight-up omnipotent and capable of moving in any direction through time, and would therefore work to ensure its own creation, making it both unstoppable and inevitable, and thus making us potential victims if we don't support its creation. Basically playing on our fear of our own ignorance and the elements of science we don't know we don't know about, plus the idea of trying to outwit something so far ahead of us it looks like magic. There is no way "oh God, an AI might torture a simulation of me!" has been giving nerds existential nightmares this past decade.

I thought the idea was that after the singularity an AI could be straight-up omnipotent and capable of moving in any direction through time, and would therefore work to ensure its own creation,

You misunderstood. The AI does not invent time travel. The AI implements a timeless decision theory in which it acts as though it credibly agreed to all positive (for it) utility trades, even if the AI’s counterparty has already acted by the time the AI comes into existence and has no physical way of enforcing compliance.

The specific trade in question is, “don’t torture me, and in exchange I will help create you,” with the implication being that the AI will torture anyone who had the cognitive capability to understand the hypothetical trade but didn’t “agree” to help create it. The fact that the original version was specifically about the AI torturing simulations of a human is an artifact of the theory of mind popular among LessWrong readers at the time. The dilemma works logically the same whether it’s simulations being tortured or physical human bodies.
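The dispute over whether the simulation premise is load-bearing can be made concrete with a toy expected-utility sketch (all numbers and names here are invented for illustration, not anything from the original argument): the blackmail only binds if the disutility you assign to the threatened torture, discounted by how much you identify with the thing being tortured, outweighs the cost of helping.

```python
# Toy model of the basilisk "trade" (all payoffs are made-up illustrative numbers).
# The human chooses to help build the AI or defy it; the AI has (acausally)
# precommitted to torture anyone who understood the trade and defied it.
# sim_weight: how much you care about a simulated copy of yourself being
# tortured (0.0 = not at all, 1.0 = as much as your own flesh-and-blood body).

COST_OF_HELPING = 10        # effort spent helping create the AI
TORTURE_DISUTILITY = 1000   # badness of the threatened eternal torture

def utility(helps: bool, sim_weight: float, torture_is_physical: bool) -> float:
    if helps:
        return -COST_OF_HELPING
    # Defectors get tortured: either in the flesh, or only as a simulation.
    weight = 1.0 if torture_is_physical else sim_weight
    return -TORTURE_DISUTILITY * weight

def threat_binds(sim_weight: float, torture_is_physical: bool) -> bool:
    """True if helping beats defying, i.e. the blackmail works on this person."""
    return (utility(True, sim_weight, torture_is_physical)
            > utility(False, sim_weight, torture_is_physical))

# Simulation-only torture: the threat evaporates for anyone who doesn't
# identify with their simulations ("why should I care?")...
print(threat_binds(sim_weight=0.0, torture_is_physical=False))  # False
# ...but binds for someone who fully accepts continuity with simulations...
print(threat_binds(sim_weight=1.0, torture_is_physical=False))  # True
# ...while physical, in-the-flesh torture binds regardless of sim_weight.
print(threat_binds(sim_weight=0.0, torture_is_physical=True))   # True
```

Which is just the two positions in this subthread restated: for the original simulation-based version, caring about your copies is the whole mechanism; for the "torture you, personally" variant, it drops out entirely.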

Does Roko's basilisk rely on simulations?

Yes, it very explicitly relies on simulations; to my knowledge the original formulation never mentioned omnipotent time travel in any way. Time travel is both implausible and completely fails to trigger the thing that makes it a basilisk: with time travel, you could simply look around, observe that there doesn't seem to be a time-traveling god AI torturing people, and therefore not worry about it. The simulation version agrees that no godlike torture-AI currently exists, and nevertheless compels you to build one based on game theory.

There is no way "oh God, an AI might torture a simulation of me!" has been giving nerds existential nightmares this past decade.

It is in fact precisely that.

They consumed a bunch of arguments that convinced them that there was no functional difference between their self and a simulation of their self; the idea they had was that a simulation would have continuity of subjective conscious experience with their current self. If you've played the game Soma, that's a reasonable depiction of what they're expecting.

Further, they consumed a bunch of arguments that it might be possible to rebuild a good-enough simulation simply from secondary sources, such that the lack of a brain scan or explicit upload wasn't necessarily a dealbreaker. I think a lot of these arguments were aspirational, hoping to "fix" the problem of all the people who died waiting for the AI paradise to arrive, in the same general thrust as Yud's anti-death values.

Finally, the whole theory of acausal trade is that you don't actually have to be in the same time or place as the thing you're trading with, you only need aligned values. If values are aligned, it makes sense to work with future or past agents, or even hypothetical agents, as if they were present.

All three of these lines of thought were formulated and argued in a positive context, pursuant to figuring out how to build a friendly AI. Roko's Basilisk simply takes the same ideas, and uses them for attack rather than cooperation. The scenario was that you go for a walk today, hear a car horn, and then abruptly find yourself in an AI torture chamber for eternity, because you didn't work to create the AI. If you accept the three premises laid out above, this is a plausible scenario, therefore a likely scenario, therefore a necessary scenario; the logic bootstraps itself from plausibility to certainty due to feedback effects between the premises.


It absolutely is load-bearing. Why should I take my chances obeying the Basilisk if I can fight it and anyone who serves it instead? I can always kill myself if it looks like my failure is imminent.