
Culture War Roundup for the week of May 8, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Does Roko's basilisk rely on simulations?

Yes, it very explicitly relies on simulations; to my knowledge the original post never mentioned omnipotent time travel at all. Time travel is both implausible and completely fails to trigger the thing that makes it a basilisk: with time travel, you can simply look around, observe that there doesn't seem to be a time-traveling god AI torturing people, and therefore not worry about it. The simulation version agrees that no godlike torture-AI currently exists, and nevertheless compels you to build one based on game theory.

There is no way "oh God, an ai might torture a simulation of me!" has been giving nerds existential nightmares this past decade.

It is in fact precisely that.

They consumed a bunch of arguments that convinced them that there was no functional difference between their self and a simulation of their self; the idea they had was that a simulation would have continuity of subjective conscious experience with their current self. If you've played the game Soma, that's a reasonable depiction of what they're expecting.

Further, they consumed a bunch of arguments that it might be possible to rebuild a good-enough simulation simply from secondary sources, such that the lack of a brain scan or explicit upload wasn't necessarily a dealbreaker. I think a lot of these arguments were aspirational, hoping to "fix" the problem of all the people who died waiting for the AI paradise to arrive, in the same general thrust as Yud's anti-death values.

Finally, the whole theory of acausal trade is that you don't actually have to be in the same time or place as the thing you're trading with, you only need aligned values. If values are aligned, it makes sense to work with future or past agents, or even hypothetical agents, as if they were present.

All three of these lines of thought were formulated and argued in a positive context, pursuant to figuring out how to build a friendly AI. Roko's Basilisk simply takes the same ideas, and uses them for attack rather than cooperation. The scenario was that you go for a walk today, hear a car horn, and then abruptly find yourself in an AI torture chamber for eternity, because you didn't work to create the AI. If you accept the three premises laid out above, this is a plausible scenario, therefore a likely scenario, therefore a necessary scenario; the logic bootstraps itself from plausibility to certainty due to feedback effects between the premises.

Finally, the whole theory of acausal trade is that you don't actually have to be in the same time or place as the thing you're trading with, you only need aligned values

I don't think you need aligned values? You only need to have some values at all, and the person you're trading with has some values, and you make a, uh, cross-universe Pareto-optimal trade by simulating the other universes and the agents within them. And there aren't any self-reference issues here because both agents will just use the right decision theories!
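A toy sketch of the "trade by simulating each other" idea, in the spirit of the program-equilibrium literature. This is my own illustration, not anything from the thread: the agent names, the bounded simulation depth, and the optimistic base case are all assumptions I'm making to sidestep the infinite-regress problem the comment above waves away.

```python
# One-shot prisoner's dilemma where an agent decides by running a
# depth-limited simulation of its counterpart. Everything here is a
# hypothetical sketch, not a faithful model of acausal trade.

COOPERATE, DEFECT = "C", "D"

def simulation_bot(opponent, depth=3):
    """Cooperate iff a bounded simulation predicts the opponent cooperates."""
    if depth == 0:
        # Simulation budget exhausted: optimistically assume cooperation.
        # This base case is what lets two copies bootstrap mutual trust.
        return COOPERATE
    return COOPERATE if opponent(simulation_bot, depth - 1) == COOPERATE else DEFECT

def defect_bot(opponent, depth=3):
    """Unconditional defector, for contrast."""
    return DEFECT

# Two simulators "recognize" each other and cooperate without ever
# communicating, while an unconditional defector gets defected against.
print(simulation_bot(simulation_bot))  # C
print(simulation_bot(defect_bot))      # D
```

The point of the sketch is that neither agent needs to share a time, place, or channel with the other; each only needs a good-enough model of the other to run. Swap "model of the other agent" for "simulation of a long-dead human" and you have the mechanism the basilisk argument leans on.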

it's certainly an idea

That's very clever, but I don't understand how it inspires existential dread. If it's a blink-and-you're-in-a-torture-chamber-for-eternity type deal, then it's like having an existential crisis over an aneurysm or hell. And while both have inspired some great existential crises over the years, their sufferers are rarely treated seriously. For clarity, I'd never read the original post - I tried to when I first read about it, but the site didn't load and someone else explained it (badly, as I now learn). So I understand the contempt some people here have for its sufferers now, but I'd expect significantly more ridicule than I have seen, even here.

I thought it was a version of the classic 'everything that has gone wrong in my life is because of nefarious outsiders' (I thought the 'torture' was the AI making your life miserable - once you know about it, you either work for it or your life will suck - sort of a Christianity for atheist nerds). What do you think is the fear at the heart of the actual basilisk? Straight fear of God (or of things with the nature of God)?

but I'd expect significantly more ridicule than I have seen towards its sufferers, even here.

Yes. There's a whole bunch of stuff that the Rationalist movement should have been mercilessly mocked over, and I have no idea how they got away with it.

That's very clever, but I don't understand how it inspires existential dread.

Because you're not the sort of nerd who thinks a simulation of you is you for all intents and purposes. It's as much a prerequisite for the framework as belief in an immortal soul is for the popular religious concept of post-death heaven/hell, or as ignoring the square-cube law for the sake of rule of cool is for finding giant robot fights awesome instead of silly.

I honestly just found the theory of acausal trade/info hazards an interesting concept, and this was mainly a kind of out-there example to demonstrate it. And I think that's the strength of the meme that made it so widely shared; it's only the timeliness of the example that has us re-examining it.