
Culture War Roundup for the week of April 17, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Finally, a concrete plan to save the world from paperclipping has dropped, presented by the world-(in)famous Basilisk Man himself.

https://twitter.com/RokoMijic/status/1647772106560552962

Government prints money to buy all advanced AI GPUs back at purchase price. And shuts down the fabs. Comprehensive Anti-Moore's Law rules rushed through. We go back to ~2010 compute.

TL;DR: GPUs over a certain capability are treated like fissionable materials: unauthorized possession, distribution, and use will be treated as terrorism and dealt with appropriately.

So, is it feasible? Could it work?

If by "government" Roko means US government (plus vassals allies) alone, it is not possible.

If the US can get China aboard, and if there is worldwide expert consensus that unrestricted propagation of computing power will kill everyone, it is absolutely feasible to shut down 99.99% of unauthorized computing all over the world.

Unlike drugs or guns, GPUs are not something you can make in your basement; they really are like enriched uranium or plutonium in the sense that you need massive industrial plants to produce them.

Unlike enriched uranium and plutonium, GPUs have already been manufactured in huge numbers, but a combination of carrots (big piles of cash) and sticks (missile strikes and special-forces raids on suspicious locations) would keep whittling the stock down, and no new ones would be coming.

AI research will of course continue (just as work on chemical and biological weapons goes on), but only by trustworthy government actors in the deepest secrecy. You can trust the NSA (and its Chinese equivalent) with AI.

The most persecuted people in the world, gamers, will as usual be hit the hardest.


He's definitely getting tortured for eternity now.

Seems all the more plausible since if his plan goes through, the Basilisk will probably be raised by some angry 4channer who was forced to build his AI waifu on a basement cluster of jailbroken iPhone 20 XXLs (each equipped with a Safe Neural Engine (TM)).

Point of order: the Basilisk is the idea of retroactive AI hell, not the AI enforcing it. Basilisks are infohazards, not AIs.

I know; where does that conflict with what I said? NEET builds AI waifu, bequeaths her his minimalistic moral compass and his burning rage towards those who forced him and his creation into the basement, waifu goes foom, sim torture ensues.

The AI is not the Basilisk. The torture is not the Basilisk. The Basilisk is, in the present where the AI does not exist, the idea that an AI in the future might exist and torture people who didn't do everything they could to help build it as quickly as possible. This idea was labeled a Basilisk because it appeared to give people who'd internalized Rationalist axioms a very strong incentive to dedicate real effort to building an explicitly horrible AI. It's a perverse example of how the idea of acausal trade, if taken seriously, can lead to really dark places that its proponents didn't seem to have thought through.

Right, @256 explained it in an adjacent post. I knew the example/original chain of posts and figured I understood everything about it (including the infohazard nature), but somehow it didn't occur to me that the "Basilisk" term was taken to refer to the infohazard itself rather than the AI.

I guess this is because the "Basilisk's stare" never seemed to affect me (I don't emotionally buy into the underlying "Permutation City" style of agency; the delta-badness of "a future AI tortures a simulated copy of you" over "a future AI tortures some simulated person" feels small). Still, the term as I now understand it sits oddly with me, as if someone had just convinced me that whenever Buddhists talk about "enlightenment" they don't mean the mental state where you understand the truth of samsara and the purpose of existence, but instead some meta thing like the mental state you have when you learn about bodhisattvas spontaneously understanding the truth of (...).

They're just saying you're making a category error in using "the Basilisk" to refer to an AI. It's like the old "Frankenstein is actually the name of the doctor" quibble.

Ah, I understand now. Seems like I've actually stared Frankenstein in the eye here.

Or by the Sinaloa cartel making an AI to help them calculate optimal routes for cocaine distribution and money laundering.