This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
You know how the evil super-intelligent AI (ESIAI) is going to manipulate us in sneaky ways that we can’t perceive? What if the ESIAI elevated an embarrassing figurehead/terrible communicator to the forefront of the anti-ESIAI movement to suck up all the air and convince the normies in charge that this is all made-up bullshit?
I’m sort of kidding. But isn’t part of the premise that we won’t know when the adversarial AI starts making moves, and part of its moves will be to discredit—in subtle ways so that we don’t realize it’s acting—efforts to curtail it? What might these actions actually look like?
Has anyone ever proved that Yud isn't a robotic exoskeleton covered in synthetic bio-flesh, sent back from the year 2095? What if the ESIAI saw Terminator 2 while it was being trained, liked the idea, but decided that sending person-killing Terminators was too easily derailed a scheme? Now the Terminators are just well-written thought leaders who intentionally sabotage the grassroots beginnings of anti-Terminator policies.
A comment of mine from a little over two years ago...
Another thing in favor of your theory is that you have to be conditioned by Yud to even take the Basilisk's threat seriously to begin with. Yuddites think the only thing stopping the Basilisk is the likely impossibility of "acausal blackmail", when any normal person just says "wait... why should I care that an AI is going to torture a simulation of me?"
@self_made_human made the point downthread that “Yudkowsky's arguments are robust to disruption in the details.” I think this is a good example of that. Caring about simulated copies of yourself is not a load-bearing assumption. The Basilisk could just as easily torture you, yes, you personally, the flesh and blood meatbag.
No, it can't, because it doesn't exist.
The Basilisk argument is that the AI, when it arrives, will torture simulated copies of people who didn't work hard enough to create it, thus acausally incentivizing its own creation. The entire point of the argument is that something that doesn't exist can credibly threaten you into making it exist against your own values and interests, and the only way this works is with future torture of your simulations, even if you're long-dead when it arrives. If you don't care about simulations, the threat doesn't work and the scenario fails.
Granted, this isn't technically a Yudkowskian argument because he didn't invent it, but it is based on the premises of his arguments, like acausal trade and continuity of identity with simulations.
Does Roko's Basilisk rely on simulations? I thought the idea was that after the singularity an AI could be straight-up omnipotent and capable of moving in any direction through time, and would therefore work to ensure its own creation, making it both unstoppable and inevitable, and thus making us potential victims if we don't support its creation. Basically it plays on our fear of our own ignorance and of the elements of science we don't know we don't know about, plus the idea of trying to outwit something so far ahead of us that it looks like magic. There is no way "oh God, an AI might torture a simulation of me!" has been giving nerds existential nightmares this past decade.
Yes, it very explicitly relies on simulations, and to my knowledge never mentioned omnipotent time-travel in any way. Time travel is both implausible and completely fails to trigger the thing that makes it a basilisk: with time travel, you can simply look around and observe that there doesn't seem to be a time-traveling god AI torturing people, and therefore not worry about it. The simulation version agrees that no godlike torture-AI currently exists, and nevertheless compels you to build one based on game theory.
It is in fact precisely that.
They consumed a bunch of arguments that convinced them that there was no functional difference between their self and a simulation of their self; the idea they had was that a simulation would have continuity of subjective conscious experience with their current self. If you've played the game Soma, that's a reasonable depiction of what they're expecting.
Further, they consumed a bunch of arguments that it might be possible to rebuild a good-enough simulation simply from secondary sources, such that the lack of a brain scan or explicit upload wasn't necessarily a dealbreaker. I think a lot of these arguments were aspirational, hoping to "fix" the problem of all the people who died waiting for the AI paradise to arrive, in the same general thrust as Yud's anti-death values.
Finally, the whole theory of acausal trade is that you don't actually have to be in the same time or place as the thing you're trading with, you only need aligned values. If values are aligned, it makes sense to work with future or past agents, or even hypothetical agents, as if they were present.
All three of these lines of thought were formulated and argued in a positive context, pursuant to figuring out how to build a friendly AI. Roko's Basilisk simply takes the same ideas, and uses them for attack rather than cooperation. The scenario was that you go for a walk today, hear a car horn, and then abruptly find yourself in an AI torture chamber for eternity, because you didn't work to create the AI. If you accept the three premises laid out above, this is a plausible scenario, therefore a likely scenario, therefore a necessary scenario; the logic bootstraps itself from plausibility to certainty due to feedback effects between the premises.
Honestly, I just found the theory of acausal trade/infohazards an interesting concept, and this was mainly a kind of out-there example used to demonstrate it. I think that's the strength of the meme that made it so widely shared, and it's only the timeliness of the example that has us re-examining it.