Culture War Roundup for the week of March 27, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Was a bit surprised to see this hadn't been posted yet, but yesterday Yudkowsky wrote an op-ed in TIME magazine where he describes the kind of regime that he believes would be necessary to throttle AI progress:

https://archive.is/A1u57

Some choice excerpts:

Many researchers working on these systems think that we’re plunging toward a catastrophe, with more of them daring to say it in private than in public; but they think that they can’t unilaterally stop the forward plunge, that others will go on even if they personally quit their jobs. And so they all think they might as well keep going. This is a stupid state of affairs, and an undignified way for Earth to die, and the rest of humanity ought to step in at this point and help the industry solve its collective action problem.

The moratorium on new large training runs needs to be indefinite and worldwide. There can be no exceptions, including for governments or militaries. If the policy starts with the U.S., then China needs to see that the U.S. is not seeking an advantage but rather trying to prevent a horrifically dangerous technology which can have no true owner and which will kill everyone in the U.S. and in China and on Earth. If I had infinite freedom to write laws, I might carve out a single exception for AIs being trained solely to solve problems in biology and biotechnology, not trained on text from the internet, and not to the level where they start talking or planning; but if that was remotely complicating the issue I would immediately jettison that proposal and say to just shut it all down.

Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for anyone, including governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.

If its presence in the CW thread needs justifying, well, it's published in a major magazine, and the kinds of policy proposals set forth would certainly ignite heated political debate were they ever to be seriously considered.

"Yudkowsky airstrike threshold" has already become a minor meme on rat and AI twitter.

Even if I thought there was a 99% chance of AI destroying humanity, creating a massive totalitarian world-state that tracks all private behavior and is willing to start nuclear wars to enforce its power doesn't seem like an improvement. What Yud is proposing is probably not possible, but if it is possible it's one of the worst futures I can imagine for the human race.

A "massive totalitarian world-state that tracks all private behavior" is not being called for here, nor is it likely. What's being proposed is much more like existing arms-control and biosafety treaties between existing countries. Manufacturing and use of large-scale GPU clusters would be regulated in much the same way other large-scale industrial processes are regulated. "The evil fascist totalitarian state is coming for you" is a staple of the political imagination, but we're very far away from the Nazis or Soviets today, and this closes the gap by .00001%.

Would there be such a major change here? Anti-mass-destruction-proliferation sentiment becoming strong enough to prompt airstrikes on suspected violators is a thing that first happened more than 40 years ago ... prompting some outrage, but also creating strange bedfellows, even bringing some of the original lukewarm critics down the same rathole eventually.

probably not possible

Not even close. Anti-nuclear-proliferation military leadership could easily point to a history with nuked cities and much-much-bigger-nuke tests, and their actions were still on the continuum from "controversial" to "outright mistaken". Imagine how crazy they'd have looked in a world where nuclear energy had so far only ever given us power plants, where nobody had ever seen a nuclear warhead! But any unfriendly AGI with decent intelligence isn't going to give us any early close calls to point to; there'd be no point in it doing anything destructive until success was certain.

creating a massive totalitarian world-state that tracks all private behavior and is willing to start nuclear wars to enforce its power doesn't seem like an improvement

No one is asking for that.

Pushing the SotA on LLMs takes millions of dollars. Pinpointing and regulating institutional actors who are doing training at that level is a lot easier than trying to stop each and every individual from running LLaMA-7B in their basement at home. The latter is not feasible, which is why no one is asking for it.

We already threaten nuclear war if Russian troops invade Latvia, a country that was part of the Soviet Union a mere 32 years ago. We already invade countries that we suspect of violating nuclear non-proliferation (Iraq). The whole (non-cynical) point of the giant globe-spanning American military is to deter and destroy threats to the civilizational order.

I also think people are underestimating the willingness of rival powers to agree to this. You think Russia and China like their odds in an AI arms race? If USG made a credible offer of “AI non-proliferation,” how do we know it wouldn’t be accepted? “Maybe don’t build god,” is only considered an unreasonable proposition by SF techbros.

We already threaten nuclear war if Russian troops invade Latvia, a country that was part of the Soviet Union a mere 32 years ago.

You might, but NATO states don't. Nuclear retaliation is reserved for nuclear attacks, and conventional forces for conventional attacks; conflating the two ignores how MAD interacts, and doesn't interact, with conventional deterrence.

We already invade countries that we suspect of violating nuclear non-proliferation (Iraq).

The United States also does not invade countries known to be violating nuclear non-proliferation, or merely suspected of violating it, and of course not those actually possessing nuclear weapons. Because, again, MAD.

The whole (non-cynical) point of the giant globe-spanning American military is to deter and destroy threats to the civilizational order.

No, the point of the globe-spanning American military is to advance American security interests and those of the American alliance network. The Americans do not generally invade their allies, or their enemies when the cost to the Americans is too high, or when their domestic political order has higher priorities than destroying other people.

I also think people are underestimating the willingness of rival powers to agree to this. You think Russia and China like their odds in an AI arms race?

Them not liking their odds in an AI arms race is why they would offer a treaty, and cheat, for the same reason they (and most major powers) cheat to varying degrees on other limitation agreements.

Cheating is the expectation in genuinely limiting international agreements, whether it be loophole abuse, redefinition of contested items, or blanket denial.

If USG made a credible offer of “AI non-proliferation,” how do we know it wouldn’t be accepted?

Because we know many of America's own citizens don't find the US Government credible, let alone many other countries, and especially its geopolitical adversaries.

Because, of course, the US has many means to cheat, and would have an even greater incentive to cheat the more other parties genuinely gave up on a potential competitive advantage / competitive parity.

suspect of violating nuclear non-proliferation (Iraq)

This was the canard that led to the invasion. In truth the intelligence would have been whatever was necessary to produce the desired outcome.

You think Russia and China like their odds in an AI arms race?

If the US is willing to cripple itself? You betcha. They'd agree and then go ahead and build their own AI.

You think Russia and China like their odds in an AI arms race? If USG made a credible offer of “AI non-proliferation,” how do we know it wouldn’t be accepted?

Credible means enforceable. Isn't this a Dark Forest situation, where malicious signatories can continue to build AI in secret? If so, even non-malicious signatories will feel compelled to develop AI in secret to avoid getting shivved in the back.

It's also worth noting there is, basically, one other country in the world that can really present a challenge in the AI Arms Race, and that's China. Does anyone here think that the US is willing and able to conduct airstrikes on Chinese territory on the chance they're violating the AINPT (AI Non-Proliferation Treaty)? Because I don't.

Right. My problem with AI doomers is their inability to imagine an alternative I would actually prefer to paperclipping. If this is the solution to the problem, I'd rather the problem not be solved.

Devil’s advocate, but “tracks all private behavior” isn’t necessary; only controlling the flow of high-tier computing resources is.

I do grant that the ability—or will—of a state to stop at that point is rather suspect.

"Only" controlling the flow of high-tier computing resources is really hard, especially when that control needs to be exercised across the entire globe. And the basic elements (GPUs) presumably won't be outlawed, just large clusters. And it's not enough to shut down 99% of clusters, since even a few slipping through the cracks is still a major threat if Yud's argument is correct. Absent panopticon-like surveillance and control, how would this be even remotely feasible?