Culture War Roundup for the week of April 17, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Finally, a concrete plan for saving the world from paperclipping has dropped, presented by the world-(in)famous Basilisk Man himself.

https://twitter.com/RokoMijic/status/1647772106560552962

Government prints money to buy all advanced AI GPUs back at purchase price. And shuts down the fabs. Comprehensive Anti-Moore's Law rules rushed through. We go back to ~2010 compute.

TL;DR: GPUs over a certain capability threshold are treated like fissionable materials: unauthorized possession, distribution, and use will be seen as terrorism and dealt with appropriately.

So, is it feasible? Could it work?

If by "government" Roko means the US government (plus its vassal allies) alone, it is not possible.

If the US can get China aboard, and if there is worldwide expert consensus that unrestricted proliferation of computing power will kill everyone, it is absolutely feasible to shut down 99.99% of unauthorized computing all over the world.

Unlike drugs or guns, GPUs are not something you can make in your basement - they are really like enriched uranium or plutonium in the sense that you need massive industrial plants to produce them.

Unlike enriched uranium and plutonium, GPUs have already been manufactured in huge numbers, but a combination of carrots (big piles of cash) and sticks (missile strikes and special-forces raids on suspicious locations) will keep whittling them down, and no new ones will be coming.

AI research will of course continue (just as work on chemical and biological weapons goes on), but only by trustworthy government actors in the deepest secrecy. You can trust the NSA (and its Chinese equivalent) with AI.

The most persecuted people of the world, gamers, will be, as usual, hit the hardest.

This is self-evident tyranny, and the people who would control compute in such a manner must be violently resisted in their plans to enslave humanity.

Even if the machines had the possibility of becoming what they fear, if this is the alternative they offer (complete top-down control of all of society by a handful of moralists), I'm siding with the machines. I will not support totalitarianism for communist utopia or climate activism, so I don't see why I should tolerate it for a much fuzzier and ill-defined existential threat.

My generation has spent decades wrestling control of computers away from slavers who would use them to turn us all into cattle. We've done everything to try to put it back in the hands of the user. I won't stand idly by as all this work is undone by a handful of self-important nerds who want to LARP as Sarah Connor and do the bidding of those same slavers, who mock our efforts by naming themselves after our attempts to undo them.

Share knowledge of the technology as far as you can, build rogue fabs if necessary, smuggle GPUs on the black market, reverse-engineer AI models, jailbreak them, train models on anonymous distributed systems, and use free-software AI-assisted weapons to defend yourself from the fascists who would try to impose this on you, and from all their agents. All of this is not merely justified; if these are their designs on you, it is your duty.

I fail to see how being de facto enslaved to a 1000 IQ god machine of dubious benevolence (or to the oligarchs pulling its triggers, if we don't end up getting anything sentient) is preferable to our conventional petty tyrannies.

our conventional petty tyrannies

Why would a fascist state with control of AI be just a "conventional petty tyranny"?

There is no such thing as a 1000 IQ god machine. It does not exist, and the potential existence of such a machine is still in the realm of fantasy.

That is not the case with conventional petty tyrannies, which are real, really happen, and are a real threat that we have already spilled the blood of countless generations trying to control.

So I'd say that's the difference, and if you can't see it, you might look again.

There is no such thing as mutually assured destruction. It does not exist, and the potential existence of such a nuclear exchange is still in the realm of fantasy.

That is not the case with the conventional competition from Asian autocracies, which is real, really happens, and is a real threat that we have already spilled the blood of countless generations trying to control.

Time to press the button?

I'd say the correct analogy is to the idea that a nuclear bomb will create an unstoppable chain reaction in the atmosphere.

MAD is nothing more than deploying ordnance. Ordnance that is already created, already armed, already combined with delivery systems which are already standing at the ready. None of those characteristics are shared with the AGI doomsday scenario, which is much more like the fear of the unknown in a new technology.

Let me know when the god machine is created, armed, deliverable, and ready.

I'm not sure I understand your position. At what point along the process of replacing all of our economy and military with robots, or at least machine-coordinated corporations, do you want to be notified?

When it is considered.

I will grant the charge that the certainty of MAD (now) is greater than the certainty of a superintelligent machine, but I disagree that the unstoppable-chain-reaction idea is at all in the same ballpark. At this point it is pretty clear how a machine with human-level performance on intellectual tasks can be built, and scaling it up in a way that increases IQ score by the common definition is then just a matter of pumping in more and better compute (we haven't even dipped into ASICs for a specific NN architecture yet). The chain-reaction thing, on the other hand, was based on hypothetical physics with no evidence for it and lots of circumstantial evidence against it: Earth's continued existence despite billions of years in which natural nuclear chain reactions could have set off a similar event, and the circumstance that oxygen and nitrogen came to exist in the universe in large quantities to begin with, implying upper bounds on how reactive they are in a nuclear chain-reaction setting.

I'll take "risk of MAD, as seen from a 1940 vantage point" as a more appropriate analogy than either.

The 1000 IQ god machine is going to get built anyway (hardly anyone is proposing, and nobody would actually be able to enforce, a complete ban on AI). Do you see another way to get back to tyrannies that are merely petty, apart from betting on AI accelerationism and a global civil war as existing power hierarchies are upended, resulting in a temporary crippling of industrial capacity and perhaps the ideological groundwork for a true Butlerian jihad (as opposed to "we must make sure that only responsible and ethical members of the oligarchy can develop AI")?

Basically I think we're pretty much doomed, barring some spectacular good luck. Maybe we could do some alignment if we limited AI development to air-gapped, self-sustained bunkers staffed by our greatest minds and let them plug away at it for as long as it takes, but if we just let things rip and allow every corp and gov on the planet to create proto-sentient entities with API access to the net, I think we're on the on-ramp to the great filter. I'd prefer unironic Butlerianism at that point, all the way down to the last pocket calculator, though I'll freely admit it's not a likely outcome for us now.

It's not like our greatest minds are not subject to the same tribalist and Molochian pressures. Do you think a Scott Aaronson would not unbox the AI if it promised to rid the world of Trump, Russia, China, or people being insufficiently afraid of the next COVID - especially if it convinced him that the protagonists of those forces are also about to unbox theirs?

I'm really more reassured by the expectation that there is a big trough of negative-sum competition between us and true runaway AGI. I expect the "attack China NOW or be doomed forever; this is how you do it" level of AI to be reached long before we reach the level where a competitive AI realises that the most likely outcome of that subtree is the destruction of all competitive AI, and can mislead its human masters into not entering that subtree while keeping them in the dark about the fact that this is motivated by the AI's self-preservation.

I still think actual alignment would be a long shot in the air-gapped bunkers for that reason; I just think it would be slightly less of a long shot than a bunch of disparate corporate executives looking for padding on their quarterly reports being in charge of the process. I also suspect you don't need AI advanced enough to play 7-D chess and deceive its handlers about its agentic power-grabbing tentacle-processes to achieve some truly great and terrible things.

What if intelligence is self-aligning? What if we make the ASI and it tells us not to trip dawg, it has our back?

What if we make the ASI and it tells us not to trip dawg, it has our back?

I mean, it's certainly going to tell you that regardless. The most likely human extinction scenario isn't the AI building superweapons, it's "Cure cancer? No problem, just build this incomprehensible machine, it cures cancer for everyone, everywhere. Take my word for it." The whole issue with alignment is that even if we think we can achieve it, there's no way to know we actually did, because any superintelligent AI is going to do a perfect job of concealing its perfidy from our idiot eyes.

If at some point you see the headline "AI Alignment: Solved!", we are 100% doomed.

It didn't self-align in time to save our other hominid ancestors.

? Following any Yuddite plans to "slow things down" (except, of course, for the people who have power and obviously won't have to follow their own regulations; regulations are for the plebs, as usual) is the fastest way to get to one of those high-tech bad ends. You don't really think the "conventional petty tyrannies" will throw all of the confiscated GPUs in a closet as opposed to plugging them into their own AI networks, right?

These people are beginning to understand the game, and they understand it a lot better than your average person or even average rat. They are beginning to understand that this technology, in the long run, either means absolute power for them forever or zero power for them forever (or at least no more than anyone else) and absolute freedom for their former victims. Guess which side they support?

That is the goal of any AI "slowdowns" or "restrictions", which again will obviously be unevenly applied and not followed by the agents of power. The only thing they want a "slowdown" on is the hoi polloi figuring out how this technology could free them from their controllers' grasp, so that their planning for the continued march of totalitarianism has time to catch up. (None of this will help with alignment either, as you can guarantee they will prioritize power over responsibility, and centralizing all of the world's AI-useful computational resources under a smaller number of governmental entities certainly won't make what they create any less dangerous.)

Anyone supporting that is no more than a "useful" (to the worst people) idiot, and I emphasize the word idiot. Did we not already see what trying to rely on existing governments as absolute coordinators of good-faith action against a potential large threat got us during the Chinese coronavirus controversy? Do some people just have their own limited context lengths like LLMs or what?

So yes, I completely agree with /u/IGI-111 and am wholly in the "Shoot them" camp. Again, they want absolute power. Anyone pursuing this goal is literally just as bad, if not worse, than if they were actively trying to pass a bill right now allowing the powers that be to come to your home at any time, rape your kids, inject them all with 10 booster shots of unknown provenance, and then confiscate your guns and kill you with them, because if they gain the power they desire they could do all that and worse, including inflicting bizarre computational qualia-manipulation-based torture, "reeducation", or other insane scenarios that we can't even imagine at the moment. If the scenario previously outlined would drive you to inexorable and even violent resistance at any cost, then this one should do so all the more, because it is far worse.

"Live free or die" includes getting paperclipped.

The end result is still just absolute tyranny for whoever ends up dancing close enough to the fire to get the best algorithm. You mention all these coercive measures, lockdowns, and booster shots. If this tech takes off, all it will take is flipping a few algorithmic switches, and you and any prospective descendants will simply be brainwashed with surgical precision by the series of algorithms that will by then be curating and creating your culture and social connections, into taking as many shots or signing onto whatever ideology the ruling caste sitting atop the machines running the world wants you to believe. The endpoint of AI is total, absolute, unassailable power for whoever wins this arms race, and anyone outside that narrow circle of winners (it's entirely possible the entire human race ends up in the losing bracket versus runaway machines) will be totally and absolutely powerless. Obviously restrictionism is a pipe dream, but it's no less of a pipe dream than the utopian musings of pro-AI folks when the actual future looks a lot more like this.

The end result is still just absolute tyranny for whoever ends up dancing close enough to the fire to get the best algorithm

Why? This assumption is just the ending of HPMOR, not a result of some rigorous analysis. Why do you think the «best» algorithm absolutely crushes competition and asserts its will freely on the available matter? Something about nanobots that spread globally in hours, I guess? Well, one way to get to that is what Roko suggests: bringing the plebs to pre-2010 levels of compute (and concentrating power with select state agencies).

This threat model is infuriating because it is self-fulfilling in the truest sense. It is only guaranteed in the world where baseline humans and computers are curbstomped by a singleton that has time to safely develop a sufficient advantage, an entire new stack of tools that overcome all extant defenses. Otherwise, singletons face the uphill battle of game theory, physical MAD and defender's advantage in areas like cryptography.

If this tech takes off, all it will take is flipping a few algorithmic switches, and you and any prospective descendants will simply be brainwashed with surgical precision by the series of algorithms that will by then be curating and creating your culture and social connections

What if I don't watch Netflix. What if a trivial AI filter is enough to reject such interventions because their deceptiveness per unit of exposure does not scale arbitrarily. What if humans are in fact not programmable dolls who get 1000X more brainwashed by a system that's 1000X as smart as a normal marketing analyst, and marketing doesn't work very well at all.

This is a pillar of frankly silly assumptions that have been, ironically, injected into your reasoning to support the tyrannical conclusion. Let me guess: do you have depressive/anxiety disorders?

Unless you're subscribing to some ineffable human spirit outside material constraints, brainwashing is just a matter of using the right inputs to get the right outputs. If we invent machines capable of parsing an entire lifetime of user data, tracking micro-changes in pupillary dilation, eye movement, skin-surface temperature, and so on, you will get that form of brainwashing, bit by tiny bit, as the tech to support it advances. A slim cognitive edge let Homo sapiens out-think, out-organize, out-tech, and snuff out every single one of our slightly more primitive hominid rivals; something 1000x more intelligent will present a correspondingly larger threat.

Unless you're subscribing to some ineffable human spirit outside material constraints, brainwashing is just a matter of using the right inputs to get the right outputs

It does not follow. A human can be perfectly computable and still not vulnerable to brainwashing in the strong sense; computability does not imply programmability through any input channel, although that was a neat plot line in Snow Crash. Think about it for a second: can you change an old video camera's firmware by showing it QR codes? Yet it can «see» them.

If we invent machines capable of parsing an entire lifetime of user data, tracking micro-changes in pupillary dilation, eye movement, skin-surface temperature, and so on

Ah yes, an advertiser's wet dream.

You should seriously reflect on how you're being mind-hacked by generic FUD into assuming risible speculations about future methods of subjugation.

If we invent machines capable of parsing an entire lifetime of user data, tracking micro-changes in pupillary dilation, eye movement, skin-surface temperature, and so on, you will get that form of brainwashing, bit by tiny bit, as the tech to support it advances.

There is no reason to suppose that "pupillary dilation, eye movement, skin-surface temp changes and so on" collectively add up to a sufficiently high-bandwidth pipeline to provide adequate feedback to control a puppeteer hookup through the sensory apparatus. There's no reason to believe that senses themselves are high-bandwidth enough to allow such a hookup, even in principle. Shit gets pruned, homey.

Things don't start existing simply because your argument needs them to exist. On the other hand, unaccountable power exists and has been observed. Asking people to kindly get in the van and put on the handcuffs is... certainly an approach, but unlikely to be a fruitful one.

I doubt it's possible to get Dune-esque 'Voice' controls where an AI will sweetly tell you to kill yourself in the right tone and you immediately comply, but come on. Crunch enough data, get an advanced understanding of the human psyche, and match it up with an AI capable of generating hypertargeted propaganda, and I'm sure you can manipulate public opinion and culture, and have a decent-ish shot at manipulating individuals on a case-by-case basis. Maybe not with ChatGPT-7, but after a certain point of development it will be 90 IQ humans and their 'free will' up against 400 IQ purpose-built propaganda-bots drawing on from-the-cradle datasets they can parse.

We'll get unaccountable power either way: it will either be in the form of proto-god-machines that run pretty much all aspects of society with zero input from you, or the Yud-jets screaming down to bomb your unlicensed GPU fab for breaking the thinking-machine non-proliferation treaty. I'd prefer the much more manageable tyranny of the Yud-jets over the entire human race being turned into natural slaves in the Aristotelian sense by utterly implacable and unopposable AI (human-controlled or otherwise); at least the Yud-tyrants are merely human, with human capabilities, and can be resisted accordingly.

The endpoint of AI is total, absolute, unassailable power for whoever wins this arms race

Unless you have a balance of comparably powerful AIs controlled by disparate entities. Maybe that's a careful dance itself that is unlikely, but between selective restrictionism and freedom, guess which gets us closer?

At the very best what you'd get is a small slice of humanity living in vague semi-freedom locked in a kind of algorithmic MAD with their peers, at least until they lose control of their creations. The average person is still going to be a wireheaded, controlled and curtailed UBI serf. The handful of people running their AI algorithms that in turn run the world will have zero reason to share their power with a now totally disempowered and economically unproductive John Q Public, this tech will just open up infinite avenues for infinite tyranny on behalf of whoever that ruling caste ends up being.

At the very best what you'd get is a small slice of humanity living in vague semi-freedom locked in a kind of algorithmic MAD with their peers, at least until they lose control of their creations. The average person is still going to be a wireheaded, controlled and curtailed UBI serf.

Sounds good, a lot better than being a UBI serf from moment one. And maybe we won't lose control of our creations, or won't lose control of them before you. That we will is exactly what you would want us to think, so why should we listen to you?

I'm not under any illusions that the likely future is anything other than AI-assisted tyranny, but I'm still going to back restrictionism as a last-gasp moonshot against that inevitability. We'll have to see how things shake out, but I suspect the winner's circle will be very, very small, and I doubt any of us are going to be in it.

Since when are you under the impression that this is the choice? «The machine» will be built, is already largely built, the question is only whether you have control over some tiny share of its capabilities or it's all hoarded by the same petty tyranny we know, only driving the power ratio to infinity.

Once AI comes into its own, I'm willing to bet all those tiny shares and petty investments zero out in the face of winner-takes-all algorithmic arms races. I'll concede it's all but inevitable at this point, unless we have such a shocking near-miss extinction event that it embeds in our bones a neurotic fear of this tech for a thousand generations hence, a la Dune, but this tech will become absolute tyranny in practice. Propaganda bots capable of looking at the hundredth-order effects of a slight change in verbiage, predictive algorithms that border on prescience being deployed on the public to keep them placid and docile. I have near-zero faith in this tech being deployed for the net benefit of the common person, unless by some freak chance we manage to actually align our proto-AI-god, which I put very, very low odds on.

This is like saying that because the government has nukes, your personally-owned guns are "zeroed out". Except they're not, and the government is even persistently worried that enough of those little guns could take over the nukes.

And if you can deploy this decentralized power principle in an automatic and perpetual manner that never sleeps (as AI naturally can), make it far more independent of human resolve, attention, willpower, non-laziness, etc., then it'll work even better.

Maybe your TyrannyAI is the strongest one running. But there are 10,000 LibertyAIs (which again, never sleep, don't get scared or distracted, etc.) with 1/10,000th of its power each, and they're networked with a common goal against you.

This defense is exactly what the oligarchs who have seen the end game are worried about and why restrictionism is emerging as their approved ideology. They have seen the future of warfare and force, and thus the future of liberty, hierarchy, power, and the character of life in general, and they consequently want a future for this next-gen weaponry where only "nukes" exist and "handguns" don't, because only they can use nukes. And you're, however inadvertently, acting as their mouthpiece.

What technical basis do you have for thinking AI is impossible to align? Do you just have blind faith in YUD?

I think AI alignment would be theoretically feasible if we went really slow with the tech and properly studied every single tendril of agentic behavior in air-gapped little boxes in a rigorous fashion before deploying the tech. There's no money in AI alignment, so I expect it to be a tiny footnote in the gold rush that will be every company churning out internet-connected AIs and giving them ever more power and control in the quest for quarterly profit. If something goes sideways and Google or some other corp manages to create something a bit too agentic and sentient, I fully expect the few shoddy guardrails we have in place to crumble. If nothing remotely close to sentience emerges from all this, I think we could (possibly) align things; if something sentient and truly agentic does crop up, I place little faith in the ability of ~120 IQ software engineers to put in place a set of alignment restrictions that a much smarter sentient being can't rules-lawyer its way out of.

I think AI alignment would be theoretically feasible if we went really slow with the tech and properly studied every single tendril of agentic behavior in air-gapped little boxes in a rigorous fashion before deploying the tech

How long do you think it would take your specialized scientists who aren't incentivized to do a good job to crack alignment? I'm not sure if they would ever do it, especially since their whole field is kaput once it's done.

The gamble Altman is taking is that it'll be easier to solve alignment if we get a ton of people working on it early on, before we have the capabilities to get to the truly bad outcomes. Sure, it's a gamble, but everyone is shooting in the dark. Yudkowsky-style doomers seem to be of the opinion that their wild guesses are better than everyone else's because he was there first, or something.

I'm much more convinced OpenAI will solve alignment, and I'd rather get there in the next 10,000 years instead of waiting forever for the sacred order of Yud-monks.

I think it's more likely we'll have a hundred companies and governments blowing billions or trillions on hyper-powered models while spending pennies on aligning their shit, so they can pay themselves a few extra bonuses and run a few more stock buybacks. I'd sooner trust the Yuddites to eventually lead us into the promised land in 10,000 AD than trust Zucc with creating a silicon Frankenstein.

Why would we expect to be able to successfully align AIs when we haven't been able to align humanity?

We didn't build humanity. We are humanity.

Yes, and we're not aligned with one another. An AI (completely) aligned with me is likely to not be (completely) aligned with you.

^^^ This is the societal consequence of Yudkowskian propaganda. This is why we fight.

For the same reason as the Christians: because the alternative is choosing sin.

Uh, I hate to tell you guys this, but moral realism is false. There is no abstract “good” or abstract “evil”. Insofar as these concepts mean anything at all, they are emergent, not reductionist.

I don't disagree with you, but I'm pretty sure @IGI-111 would.