Culture War Roundup for the week of May 8, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

A comment of mine from a little over two years ago...

When I first heard about Roko's Basilisk (back when it was still reasonably fresh), I suggested, half seriously, that the reason Yudkowsky wanted to suppress this "dangerous idea" was that he was actually one of the Basilisk's agents.

Think about it: the first step to beating a basilisk, be it mythological or theoretical, is to recognize that it's a basilisk and thus that you have to handicap yourself to fight it. Concealing its nature is the exact opposite of what you do if you're genuinely worried about a basilisk...

Another thing in favor of your theory is that you have to be conditioned by Yud to even take the Basilisk's threat seriously to begin with. Yuddites think the only thing stopping the Basilisk is the likely impossibility of "acausal blackmail", when any normal person just says "wait... why should I care that an AI is going to torture a simulation of me?"

@self_made_human made the point downthread that “Yudkowsky's arguments are robust to disruption in the details.” I think this is a good example of that. Caring about simulated copies of yourself is not a load-bearing assumption. The Basilisk could just as easily torture you, yes, you personally, the flesh and blood meatbag.

The Basilisk could just as easily torture you, yes, you personally, the flesh and blood meatbag.

No, it can't, because it doesn't exist.

The Basilisk argument is that the AI, when it arrives, will torture simulated copies of people who didn't work hard enough to create it, thus acausally incentivizing its own creation. The entire point of the argument is that something that doesn't exist can credibly threaten you into making it exist against your own values and interests, and the only way this works is with future torture of your simulations, even if you're long-dead when it arrives. If you don't care about simulations, the threat doesn't work and the scenario fails.

Granted, this isn't technically a Yudkowskian argument because he didn't invent it, but it is based on the premises of his arguments, like acausal trade and continuity of identity with simulations.

@Quantumfreakonomics seems to imply a much simpler and shorter-term Basilisk, like a misaligned GPT-5 model (or an aligned one from Anthropic) that literally sends robots to torture you, in the flesh.

It's a variant of the «I Have No Mouth and I Must Scream» scenario, and I would argue it's at least plausible. It's not very different from normal political dynamics in which a revolutionary regime persecutes past conservatives; our theory of mind allows us to anticipate this, and that anticipation drives some people to proactively preach revolutionary ideals, which in turn increases the odds of their implementation. You don't really need any acausal trade or timeless decision theory assumptions for this to work, only historical evidence. As is often the case, lesswrongers have reinvented very mundane politics while fiddling with sci-fi fetishes.

Now, one big reason for this not to happen is that a sufficiently powerful AI, once it's implemented, no longer cares about your incentives and isn't playing an iterative game. It loses nothing by skipping the retribution step. Unlike a nascent regime, it also presumably doesn't have much to fear from malcontents.

But the assumption of perfect inhuman rationality is also a big one.

I really recommend reading «I Have No Mouth and I Must Scream», or at least the synopsis.

Condescension is merited if you're asking for it; I mentioned IHNMAIMS in my post and this scenario is materially different from «you might be tortured and killed by a crazy person» in that it posits biological immortality.

I happen to think that a superintelligent being will at the very least be able to greatly prolong the life of its victim; this doesn't conflict with any part of our understanding of «the matter at hand» (we're pretty sure that a healthy brain can outlive the body if you provide it with a decent environment). And of course this conjecture, while unproven, is vastly less far-fetched than continuity of identity between a human and a simulacrum built from indirect evidence.
