
Culture War Roundup for the week of April 17, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I fail to see how being de facto enslaved to a 1000 IQ god machine of dubious benevolence (or the oligarchs pulling its triggers if we don't end up getting anything sentient) is preferable to our conventional petty tyrannies.

Following any Yuddite plans to "slow things down" (except, of course, for the people in power, who as usual won't have to follow their own regulations; those are for the plebs) is the fastest way to get to one of those high-tech bad ends. You don't really think the "conventional petty tyrannies" will throw all of the confiscated GPUs in a closet rather than plugging them into their own AI networks, do you?

These people are beginning to understand the game, and they understand it a lot better than your average person or even average rat. They are beginning to understand that this technology, in the long run, either means absolute power for them forever or zero power for them forever (or at least no more than anyone else) and absolute freedom for their former victims. Guess which side they support?

That is the goal of any AI "slowdowns" or "restrictions", which again will obviously be unevenly applied and not followed by the agents of power. The only thing they want a "slowdown" on is the hoi polloi figuring out how this technology could free them from their controllers' grasp, buying time for the planning of the continued march of totalitarianism to catch up. (None of this will help with alignment either: you can guarantee they will prioritize power over responsibility, and centralizing all of the world's AI-useful computational resources under a smaller number of governmental entities certainly won't make what they create any less dangerous.)

Anyone supporting that is no more than a "useful" (to the worst people) idiot, and I emphasize the word idiot. Did we not already see what trying to rely on existing governments as absolute coordinators of good-faith action against a potential large threat got us during the Chinese coronavirus controversy? Do some people just have their own limited context lengths like LLMs or what?

So yes, I completely agree with /u/IGI-111 and am wholly in the "Shoot them" camp. Again, they want absolute power. Anyone pursuing this goal is literally as bad as, if not worse than, someone actively trying to pass a bill right now allowing the powers that be to come to your home at any time, rape your kids, inject them all with 10 booster shots of unknown provenance, and then confiscate your guns and kill you with them, because if they gain the power they desire they could do all that and worse, including inflicting bizarre computational qualia-manipulation-based torture, "reeducation", or other insane scenarios that we can't even imagine at the moment. If the scenario just outlined would drive you to inexorable and even violent resistance at any cost, then this one should drive you further still, because it is far worse.

"Live free or die" includes getting paperclipped.

The end result is still just absolute tyranny for whoever ends up dancing close enough to the fire to get the best algorithm. You mention all these coercive measures, lockdowns, and booster shots. If this tech takes off, all it will take is flipping a few algorithmic switches, and you and any prospective descendants will simply be brainwashed with surgical precision, by the series of algorithms that will by then be curating and creating your culture and social connections, into taking as many shots or signing onto whatever ideology the ruling caste sitting atop the machines running the world wants you to believe. The endpoint of AI is total, absolute, unassailable power for whoever wins this arms race, and anyone outside that narrow circle of winners (it's entirely possible the entire human race ends up in the losing bracket versus runaway machines) will be totally and absolutely powerless. Obviously restrictionism is a pipe dream, but it's no less of a pipe dream than the utopian musings of pro-AI folks when the actual future looks a lot more like this.

The end result is still just absolute tyranny for whoever ends up dancing close enough to the fire to get the best algorithm

Why? This assumption is just the ending of HPMOR, not a result of some rigorous analysis. Why do you think the «best» algorithm absolutely crushes competition and asserts its will freely on the available matter? Something about nanobots that spread globally in hours, I guess? Well, one way to get to that is what Roko suggests: bringing the plebs to pre-2010 levels of compute (and concentrating power with select state agencies).

This threat model is infuriating because it is self-fulfilling in the truest sense. It is only guaranteed in the world where baseline humans and computers are curbstomped by a singleton that has time to safely develop a sufficient advantage, an entire new stack of tools that overcome all extant defenses. Otherwise, singletons face the uphill battle of game theory, physical MAD and defender's advantage in areas like cryptography.

If this tech takes off all it will take is flipping a few algorithmic switches and you and any prospective descendants will simply be brainwashed with surgical precision by the series of algorithms that will be curating and creating your culture and social connections at that point

What if I don't watch Netflix. What if a trivial AI filter is enough to reject such interventions because their deceptiveness per unit of exposure does not scale arbitrarily. What if humans are in fact not programmable dolls who get 1000X more brainwashed by a system that's 1000X as smart as a normal marketing analyst, and marketing doesn't work very well at all.

This is a pillar of frankly silly assumptions that have been, ironically, injected into your reasoning to support the tyrannical conclusion. Let me guess: do you have depressive/anxiety disorders?

Unless you're subscribing to some ineffable human spirit outside material constraints, brainwashing is just a matter of using the right inputs to get the right outputs. If we invent machines capable of parsing an entire lifetime of user data, tracking micro changes in pupillary dilation, eye movement, skin-surface temp changes and so on you will get that form of brainwashing, bit by tiny bit as the tech to support it advances. A slim cognitive edge let Homo sapiens out-think, out-organize, out-tech, and snuff out every single one of our slightly more primitive hominid rivals; something 1000x more intelligent will present a correspondingly larger threat.

If we invent machines capable of parsing an entire lifetime of user data, tracking micro changes in pupillary dilation, eye movement, skin-surface temp changes and so on you will get that form of brainwashing, bit by tiny bit as the tech to support it advances.

There is no reason to suppose that "pupillary dilation, eye movement, skin-surface temp changes and so on" collectively add up to a sufficiently high-bandwidth pipeline to provide adequate feedback to control a puppeteer hookup through the sensory apparatus. There's no reason to believe that senses themselves are high-bandwidth enough to allow such a hookup, even in principle. Shit gets pruned, homey.

Things don't start existing simply because your argument needs them to exist. On the other hand, unaccountable power exists and has been observed. Asking people to kindly get in the van and put on the handcuffs is... certainly an approach, but unlikely to be a fruitful one.

I doubt it's possible to get Dune-esque 'Voice' controls where an AI will sweetly tell you to kill yourself in the right tone and you immediately comply, but come on. Crunch enough data, develop an advanced understanding of the human psyche, and match it up with an AI capable of generating hypertargeted propaganda, and I'm sure you can manipulate public opinion and culture, and have a decent-ish shot at manipulating individuals on a case-by-case basis. Maybe not with ChatGPT-7, but after a certain point of development it will be 90 IQ humans and their 'free will' up against 400 IQ purpose-built propaganda-bots drawing on from-the-cradle datasets they can parse.

We'll get unaccountable power either way: either proto-god-machines that will run pretty much all aspects of society with zero input from you, or Yud-jets screaming down to bomb your unlicensed GPU fab for breaking the thinking-machine non-proliferation treaty. I'd prefer the much more manageable tyranny of the Yud-jets over the entire human race being turned into natural slaves in the Aristotelian sense by utterly implacable and unopposable AI (human-controlled or otherwise); at least the Yud-tyrants are merely human, with human capabilities, and can be resisted accordingly.

If you want people to take your scenario seriously, it needs to be specific enough to be grappled with. You said "brainwashed with surgical precision". Now you're saying "manipulate public opinion and culture" and "have a decent-ish shot at manipulating individuals on a case-by-case basis".

All of the above terms are quite vague. If the AI makes me 0.0002% more likely to vote Democrat, or literally puppets me through flashing lights, either can be called "manipulated".

As for the rest, I see no reason to suppose that the Yud-tyrants would restrict themselves to being merely human with merely human capabilities. They're trying to protect the light-cone, after all; why leave power on the table? Cooperation with them is an extremely poor gamble, almost certainly worse than taking our chances with the AIs straight-up.

We'll be dealing with machines that are our intellectual peers, and then our intellectual masters in short order once we hit machines-making-machines-making-machines land. I doubt humans are so complex that a massively more advanced intelligence can't pull our strings if it wants to. Frankly I suspect the common masses (myself included) will be defanged, disempowered, and denied access to the light-cone galactic fun times either way, but I see the odds as the opposite. Let's be honest, our odds are pretty slim either way; we're just quibbling over the hundredths, maybe thousandths, of a percent chance that we get everything aligned AI-wise and don't slip into algorithmic hell/extinction, or that the Yud-lords aren't seduced by the promises of the thinking machines they were sworn to destroy. I cast my vote (for all the zero weight it carries) with the Yud-lords.