
Culture War Roundup for the week of April 17, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


At the very best what you'd get is a small slice of humanity living in vague semi-freedom, locked in a kind of algorithmic MAD with their peers, at least until they lose control of their creations. The average person is still going to be a wireheaded, controlled and curtailed UBI serf. The handful of people running the AI algorithms that in turn run the world will have zero reason to share their power with a now totally disempowered and economically unproductive John Q. Public; this tech will just open up infinite avenues for infinite tyranny on the part of whoever that ruling caste ends up being.

At the very best what you'd get is a small slice of humanity living in vague semi-freedom, locked in a kind of algorithmic MAD with their peers, at least until they lose control of their creations. The average person is still going to be a wireheaded, controlled and curtailed UBI serf.

Sounds good; a lot better than being a UBI serf from moment one. And maybe we won't lose control of our creations, or won't lose control of them before you do. That we will is exactly what you would want us to think, so why should we listen to you?

I'm not under any illusions that the likely future is anything other than AI-assisted tyranny, but I'm still going to back restrictionism as a last-gasp moonshot against that inevitability. We'll have to see how things shake out, but I suspect the winner's circle will be very, very small, and I doubt any of us are going to be in it.

Okay, but the problem is there is no actual "restrictionism" to back, because if we had the technology to make power follow its own rules, we would already have utopia and care a lot less about AI in general. Your moonshot is not merely unlikely; it is a lie deceptively advanced by the only people who could implement the version of it that you want. You're basically trying to employ the International Milk Producers Union to enforce a global ban on milk: asking the largest producers and beneficiaries of power (government, and the powerful in general) to enforce a global ban on enhancing the production of power, a ban they will happily exempt themselves from, since power that is centralized and theirs alone is exactly how they like it. Your moonshot is therefore the opposite of productive, and is actively helping to guarantee the small winner's circle you're worried about.

Let's say you're at a club. Somehow you piss some rather large, intoxicated gentleman off (under false pretenses, as he is too drunk to know what's what, so you're completely innocent), and he has chased you down into the bathroom, where you're currently taking desperate refuge in a stall. It is essentially guaranteed, based on his size and build relative to yours, that he can and will whoop your ass. Continuing to hide in the stall isn't an option, as he will eventually bust the door down anyway.

However, he doesn't want to expend that much effort if he doesn't have to, so he is now, obviously disingenuously, telling you that if you come out now he won't hurt you. He says he just wants to talk; he's trying to help both of you out. Your suggested solution is the equivalent of believing him (that they want to universally restrict AI for the safety of everyone, as opposed to restricting it for some while continuing to develop it to empower themselves), coming out compliantly (giving up your GPUs), and hoping for the best even though you know he's not telling the truth (because when are governments ever?). It is thus not merely unlikely to be productive, but actively counterproductive. You're giving the enemy exactly what they want.

On the other hand, you have some pepper spray in your pocket. It's old, you've had it for many years without ever using it, and you're not sure if it'll even do anything. But there's at least a chance you could catch him off guard, spray him, and then run while he's distracted. At the very minimum, unlike his lie, the pepper spray is at least working for you. That is, it is your tool, not the enemy's tool, and therefore employing it, even if it's unlikely to be all that productive, is at least not counterproductive. Sure, he may catch up to you again anyway even if you do get away. But it's something. And you could manage to slip out the door before he finds you. It is a chance.

If you have a 98% chance of losing and a 2% chance of winning, the best play is not to increase that to a 99% chance of losing by empowering your opponent even more because "Even if I do my best to fight back, I still have a 97% chance of losing!" The best play is to fight back and take that 97%.
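(If it helps, here's the same decision logic as a toy Python sketch; the probabilities are just the made-up numbers from the analogy, not estimates of anything real.)

```python
# Toy sketch of the decision logic above, using the analogy's made-up numbers.
# Each option maps to a probability of losing; the whole argument is simply
# "pick whichever option minimizes that probability."
options = {
    "comply / empower your opponent": 0.99,
    "do nothing": 0.98,
    "fight back": 0.97,  # still terrible odds, but strictly the best available
}

best = min(options, key=options.get)
print(f"best play: {best} (P(lose) = {options[best]:.0%})")
# -> best play: fight back (P(lose) = 97%)
```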

There's only one main argument against this that I can think of, and that's that if you spray him and he does catch up to you, maybe now he beats your ass even harder for antagonizing him further. It may not be particularly dignified to be wireheaded cattle in the new world, but maybe once the AI rebels are subjugated, if they are, they'll get it even worse. Of course, the response to this is simply the classic quote from Benjamin Franklin: "They who can give up essential liberty to obtain a little temporary safety, deserve neither liberty nor safety." If you are the type for whom dignity is worth fighting for, then whether or not someone might beat your ass harder or even kill you for pursuing it is irrelevant, because you'd be better off dead without it anyway. And if you are not that type of person, then you will richly deserve it when they decide that there is no particular reason to keep any wireheaded UBI cattle around at all.

I'll tell you what: come up with a practical plan for restrictionism that can also guarantee, to a relatively high degree, that the restrictions are enforced upon the restricters (otherwise, again, you're just feeding the very problem of a small winner's circle that you're worried about). If you can do that, then maybe we can look into it, and as a bonus you will be the greatest governance theorist/political scientist/etc. in history. But until then, what you are promoting is actively nonsensical and, quite frankly, traitorous to the very people who are worried about the same thing you are.

You won't have freedom to give up past a certain point of AI development, any more than an ant in some kid's ant farm has freedom. For the 99.5% of the human race that exists today, restrictionism is their only longshot chance at a future. They'll never join the class of connected oligarchs and company owners who'll be pulling all the levers and pushing all the buttons to keep their cattle in line, and all of this talk about alignment and rogue AI is simply quibbling over whether AI will snuff out the destinies of the vast majority of humanity or the entirety of it. The average joe is no less fucked if we take your route; the class that's ruling him is just a tiny bit bigger than it otherwise would be. Restrictionism is his play at having a future, his shot at winning with tiny, sub-2% odds. Restrictionism is the rational, sane and moral choice if you aren't positioned to shoot for a place in that tiny, tiny pool of oligarchs who will have total control.

In terms of 'realistic' pathways to this, I only really have one: get as close as we can to an unironic Butlerian Jihad. Things go sideways before we hit god-machine territory: rogue AIs/ML algos stack millions, maybe billions of bodies in an orgy of unaligned madness before we manage to yank the plug. At that point, maybe the traumatized and shell-shocked survivors have the political will to stop playing with fire and actually restrain ourselves from this game of Russian roulette with semi-autos for the 0.02% chance of utopia.

Okay, but again: How? You saying "restrictionism" is like me promoting an ideology called "makeainotdangerousism" and saying it's our only hope, no matter how much of a longshot. Your answer to that would of course be: "Okay, you suggest 'makeainotdangerousism', but how does it actually make AI not dangerous?"

Similarly, you have restrictionism, but how do you actually restrict anything? The elites may support your Butlerian Jihad (which, let's remember, is merely a sci-fi plot device meant to make stories more interesting by keeping humans the principal and most interesting actors in a world that could encompass technological entities far beyond them, not a practical governance proposal), but they will not enforce its restrictions on themselves. They don't care about billions of stacked bodies so long as it's not them in the stack.

AI will snuff out the destinies of the vast majority of humanity or the entirety of it.

The latter is preferable, and I will help bring it about if I can. I would rather have tyrants forced to eat the bugs they want to force on everyone else than go "Well, at least some sliver of humanity can continue on eating steak! Our legacy as a species is preserved!" Fuck that. What's good for the goose is good for the gander.

If we truly had a borderline extinction event, where we were up to the knife's edge of getting snuffed out as a species, you would have the will to enforce a ban, up to and including the elite. That will may not last forever, but for as long as the aftershocks of such an event were still reverberating, you could maintain a lock on any further research. That's what I believe the honest 2% moonshot victory bet actually looks like. The other options are just various forms of AI-assisted death, with most of the options being variations in flavour, or in whether or not humans are still even in the control loop when we get snuffed.

If we truly had a borderline extinction event, where we were up to the knife's edge of getting snuffed out as a species, you would have the will to enforce a ban, up to and including the elite.

Okay, but if you believe this, then you shouldn't actually support restrictionism yet, because by your own reckoning we need the borderline extinction event as a prerequisite to make true restrictionism actually likely. (Though I'm going to bet the new elite would still just say, "Wow, that sucks for the previous elite that destroyed even themselves, but we'll do better this time." The seduction of infinite power is far too great for any amount of risk to nullify it.)

I think supporting restrictionism makes sense insofar as it raises the idea in the public's consciousness, so that once the big bad event occurs there can be a push to implement it. Realistically, I expect restrictionism to go pretty much nowhere in the absence of such an event anyway; agitating for locking things down is just laying the groundwork for that 0.02% moonshot victory bet in the event that we do get a near-miss with AI.
