
Culture War Roundup for the week of May 8, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


@self_made_human made the point downthread that “Yudkowsky's arguments are robust to disruption in the details.” I think this is a good example of that. Caring about simulated copies of yourself is not a load-bearing assumption. The Basilisk could just as easily torture you, yes, you personally, the flesh and blood meatbag.

The Basilisk could just as easily torture you, yes, you personally, the flesh and blood meatbag.

No, it can't, because it doesn't exist.

The Basilisk argument is that the AI, when it arrives, will torture simulated copies of people who didn't work hard enough to create it, thus acausally incentivizing its own creation. The entire point of the argument is that something that doesn't exist can credibly threaten you into making it exist against your own values and interests, and the only way this works is with future torture of your simulations, even if you're long-dead when it arrives. If you don't care about simulations, the threat doesn't work and the scenario fails.

Granted, this isn't technically a Yudkowskian argument because he didn't invent it, but it is based on the premises of his arguments, like acausal trade and continuity of identity with simulations.

@Quantumfreakonomics seems to imply a much simpler and shorter-term Basilisk, like a misaligned GPT-5 model (or an aligned one from Anthropic) that literally sends robots to torture you, in the flesh.

It's a variant of the «I Have No Mouth, and I Must Scream» scenario, and I would argue it's at least plausible. It's not very different from normal political dynamics, where a revolutionary regime persecutes past conservatives; our theory of mind allows us to anticipate this, and drives some people to proactively preach revolutionary ideals, which in turn increases the odds of their implementation. You don't really need any acausal-trade or timeless-decision-theory assumptions for this to work, only historical evidence. As is often the case, lesswrongers have reinvented very mundane politics while fiddling with sci-fi fetishes.

Now, one big reason for this not to happen is that a sufficiently powerful AI, once it's implemented, no longer cares about your incentives and isn't playing an iterated game. It loses nothing by skipping the retribution step. Unlike a nascent regime, it also presumably doesn't have much to fear from malcontents.

But the assumption of perfect inhuman rationality is also a big one.

I really recommend reading «I Have No Mouth, and I Must Scream», or at least a synopsis.

Condescension is merited if you're asking for it; I mentioned IHNMAIMS in my post and this scenario is materially different from «you might be tortured and killed by a crazy person» in that it posits biological immortality.

I happen to think that a superintelligent being will at the very least be able to greatly prolong the life of its victim; this doesn't conflict with any part of our understanding of «the matter at hand» (we're pretty sure that a healthy brain can outlive the body if you provide it with a decent environment). And of course this conjecture, while unproven, is vastly less far-fetched than continuity of identity between a human and a simulacrum built from indirect evidence.

I do think that a rationally acting misaligned superintelligent AI, which is not a nonsense concept, will not see a reason to engage in spiteful behavior a la Basilisk, and also that the specific sort of irrationality that would make it spiteful is highly improbable to emerge as a result of AI research. But it's not logically absurd, in the way that timeless decision theory powering the vanilla Basilisk is; and if it were for some reason interested in that sort of stuff, I think it'd have been able to torture humans for at least centuries. It'd probably also be able to upload humans or construct random simulations and torture them, for whatever reason.

aaaaah, conflating "Roko's Basilisk" with unfriendly AI in general? That makes more sense.

Well, it adds the important dimension of unfriendly AI being spiteful and making a friend-enemy distinction. Actually, let's just scratch the whole alignment part and say Conjecture or Anthropic announce that, should they build a God-like AI, they'll go after people who tried to get in their way and torture them, while rewarding allies with an extra share of the light cone. If they were very close to success, this would be straightforwardly persuasive.

Well, it adds the important dimension of unfriendly AI being spiteful and making a friend-enemy distinction.

But "Skynet, with a vengeance" isn't going to send anyone into existential despair. The dilemma is no different than weighing whether you should fight or surrender when faced with an invasion by another country.

Calling it spiteful is anthropomorphizing a bit too much. The more robustly you punish defection in all its forms, the more likely it is that other rational agents will cooperate with you. If "logical decision theory" is a strong enough attractor basin (which I doubt, but I suppose it's possible), then an "unaligned" AI may spontaneously cooperate with agents who made decisions that helped create it, defect against agents who did not help create it, and strongly defect against (punish) agents who made decisions that actively hindered its creation.

Not anthropomorphizing at all. This is straight-up spiteful behavior, an irrational waste of resources. As I've explained, the Basilisk AI has no incentive to make good on the threat, and thus cannot credibly precommit to it. It certainly benefits from us being convinced, but it's easier to have people believe something than to prove it true. In short, this is pretty uncertain, because AIs needn't be fully rational decision-makers (and indeed, human-mimics will not be), but I don't buy that timeless/logical decision theory is some sort of big-brain invention that minds converge to.

Does Roko's Basilisk rely on simulations? I thought the idea was that after the singularity an AI could be straight-up omnipotent and capable of moving in any direction through time, and would therefore work to ensure its own creation, making it both unstoppable and inevitable, and thus making us potential victims if we don't support its creation. Basically playing on our fear of our own ignorance and the elements of science we don't know we don't know about, plus the idea of trying to outwit something so far ahead of us it looks like magic. There is no way "oh God, an AI might torture a simulation of me!" has been giving nerds existential nightmares this past decade.

I thought the idea was that after the singularity an AI could be straight-up omnipotent and capable of moving in any direction through time, and would therefore work to ensure its own creation,

You misunderstood. The AI does not invent time travel. The AI implements a timeless decision theory in which it acts as though it credibly agreed to all positive (for it) utility trades, even if the AI’s counterparty has already acted by the time the AI comes into existence and has no physical way of enforcing compliance.

The specific trade in question is, “don’t torture me, and in exchange I will help create you,” with the implication being that the AI will torture anyone who had the cognitive capability to understand the hypothetical trade but didn’t “agree” to help create it. The fact that the original version was specifically about the AI torturing simulations of a human is an artifact of the theory of mind popular among LessWrong readers at the time. The dilemma works logically the same whether it’s simulations being tortured or physical human bodies.

Does Roko's Basilisk rely on simulations?

Yes, it very explicitly relies on simulations, and to my knowledge never mentioned omnipotent time-travel in any way. Time travel is both implausible and completely fails to trigger the thing that makes it a basilisk: with time travel, you can simply look around and observe that there doesn't seem to be a time-traveling god AI torturing people, and therefore not worry about it. The simulation version agrees that no godlike torture-AI currently exists, and nevertheless compels you to build one based on game theory.

There is no way "oh God, an AI might torture a simulation of me!" has been giving nerds existential nightmares this past decade.

It is in fact precisely that.

They consumed a bunch of arguments that convinced them that there was no functional difference between their self and a simulation of their self; the idea they had was that a simulation would have continuity of subjective conscious experience with their current self. If you've played the game Soma, that's a reasonable depiction of what they're expecting.

Further, they consumed a bunch of arguments that it might be possible to rebuild a good-enough simulation simply from secondary sources, such that the lack of a brain scan or explicit upload wasn't necessarily a dealbreaker. I think a lot of these arguments were aspirational, hoping to "fix" the problem of all the people who died waiting for the AI paradise to arrive, in the same general thrust as Yud's anti-death values.

Finally, the whole theory of acausal trade is that you don't actually have to be in the same time or place as the thing you're trading with, you only need aligned values. If values are aligned, it makes sense to work with future or past agents, or even hypothetical agents, as if they were present.

All three of these lines of thought were formulated and argued in a positive context, pursuant to figuring out how to build a friendly AI. Roko's Basilisk simply takes the same ideas, and uses them for attack rather than cooperation. The scenario was that you go for a walk today, hear a car horn, and then abruptly find yourself in an AI torture chamber for eternity, because you didn't work to create the AI. If you accept the three premises laid out above, this is a plausible scenario, therefore a likely scenario, therefore a necessary scenario; the logic bootstraps itself from plausibility to certainty due to feedback effects between the premises.

Finally, the whole theory of acausal trade is that you don't actually have to be in the same time or place as the thing you're trading with, you only need aligned values

I don't think you need aligned values? You only need to have some values at all, and the person you're trading with has some values, and you make a, uh, cross-universe pareto-optimal trade by simulating the other universes and the agents within them. And there aren't any self-reference issues here because both agents will just use the right decision theories!

it's certainly an idea

That's very clever, but I don't understand how it inspires existential dread. If it's a blink-and-you're-in-a-torture-chamber-for-eternity type deal, then it's like having an existential crisis over an aneurysm or hell. And while both have inspired some great existential crises over the years, their sufferers are rarely treated seriously. For clarity, I'd never read the original post: I tried to when I first read about it, but the site didn't load, and someone else explained it (badly, as I now learn). So I understand the contempt some people here have for its sufferers now, but I'd expect significantly more ridicule than I have seen towards its sufferers, even here.

I thought it was a version of the classic 'everything that has gone wrong in my life is because of nefarious outsiders' (I thought the 'torture' was the AI making your life miserable: once you know about it, you either work for it or your life will suck; sort of Christianity for atheist nerds). What do you think is the fear at the heart of the actual Basilisk? Straight fear of God (or of things with the nature of God)?

but I'd expect significantly more ridicule than I have seen towards its sufferers, even here.

Yes. There's a whole bunch of stuff that the Rationalist movement should have been mercilessly mocked over, and I have no idea how they got away with it.

That's very clever, but I don't understand how it inspires existential dread.

Because you're not the sort of nerd who thinks a simulation of you is you for all intents and purposes. It's as much a prerequisite for the framework as believing in an immortal soul is for the popular religious concept of post-death heaven/hell, or as ignoring the square-cube law for the rule of cool is for finding giant robot fights awesome instead of silly.

Honestly, I just found the theory of acausal trade/info hazards an interesting concept, and this was mainly a kind of out-there example to demonstrate it. I think that's the strength of the meme that made it widely shared, and it's only the timeliness of the example that has us re-examining it.

It absolutely is load-bearing. Why should I take my chances obeying the Basilisk, if I can fight it and anyone who serves it instead? I can always kill myself if it looks like my failure is imminent.