This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Was a bit surprised to see this hadn't been posted yet, but yesterday Yudkowsky wrote an op-ed in TIME magazine where he describes the kind of regime that he believes would be necessary to throttle AI progress:
https://archive.is/A1u57
Some choice excerpts:
If its presence in the CW thread needs justifying: it was published in a major magazine, and the kinds of policy proposals set forth would certainly ignite heated political debate were they ever to be seriously considered.
"Yudkowsky airstrike threshold" has already become a minor meme on rat and AI twitter.
Even if I thought there was a 99% chance of AI destroying humanity, creating a massive totalitarian world-state that tracks all private behavior and is willing to start nuclear wars to enforce its power doesn't seem like an improvement. What Yud is proposing is probably not possible, but if it is possible it's one of the worst futures I can imagine for the human race.
A "massive totalitarian world-state that tracks all private behavior" is not being called for here, nor is it likely. What's being proposed is much closer to existing arms-control and biosafety treaties between countries. Manufacturing and use of large-scale GPU clusters would be regulated in much the same way other large-scale industrial processes are regulated. "The evil fascist totalitarian state is coming for you" is a staple of the political imagination, but we're very far from the Nazis or Soviets today, and this closes the gap by .00001%.
Would there be such a major change here? Anti-mass-destruction-proliferation sentiment becoming strong enough to prompt airstrikes on suspected violators is a thing that first happened more than 40 years ago ... prompting some outrage, but also creating strange bedfellows, even bringing some of the original lukewarm critics down the same rathole eventually.
Not even close. Anti-nuclear-proliferation military leadership could easily point to a history of nuked cities and much-much-bigger-nuke tests, and their actions were still somewhere on the continuum from "controversial" to "outright mistaken". Imagine how crazy they'd have looked in a world where nuclear energy had so far only ever given us power plants, where nobody had ever seen a nuclear warhead! But any unfriendly AGI with decent intelligence isn't going to give us any early close calls to point to; there'd be no point in doing anything destructive until success was certain.
No one is asking for that.
Pushing the SotA on LLMs takes millions of dollars. Pinpointing and regulating the institutional actors who train at that level is a lot easier than trying to stop each and every individual from running LLaMA-7B in their basement at home. The latter is not feasible, which is why no one is asking for it.
We already threaten nuclear war if Russian troops invade Latvia, a country that was part of the Soviet Union a mere 32 years ago. We already invade countries that we suspect of violating nuclear non-proliferation (Iraq). The whole (non-cynical) point of the giant globe-spanning American military is to deter and destroy threats to the civilizational order.
I also think people are underestimating the willingness of rival powers to agree to this. You think Russia and China like their odds in an AI arms race? If USG made a credible offer of “AI non-proliferation,” how do we know it wouldn’t be accepted? “Maybe don’t build god,” is only considered an unreasonable proposition by SF techbros.
You might, but NATO states don't. Nuclear retaliation is reserved for nuclear attacks, and conventional forces for conventional attacks; conflating the two ignores how MAD does, and does not, interact with conventional deterrence.
The United States also does not invade countries known to be violating nuclear non-proliferation, or merely suspected of violating it, and certainly not those that actually have nuclear weapons. Because, again, MAD.
No, the point of the globe-spanning American military is to advance American security interests and those of the American alliance network. The Americans do not generally invade their allies, or their enemies when the cost to the Americans is too high, or anyone when their domestic political order has higher priorities than destroying other people.
Their not liking their odds in an AI arms race is why they would offer a treaty, and then cheat, for the same reason they (and most major powers) cheat to varying degrees on other limitation agreements.
Cheating is the expectation in genuinely limiting international agreements, whether it be loophole abuse, redefinition of contested items, or blanket denial.
Because we know many of America's own citizens don't find the US government credible, let alone many other countries, and especially its geopolitical adversaries.
Because, of course, the US has many means to cheat, and would have an even greater incentive to cheat the more other parties genuinely gave up on a potential competitive advantage / competitive parity.
This was the canard that led to the invasion. In truth, the intelligence would have been whatever was necessary to produce the desired outcome.
If the US is willing to cripple itself? You betcha. They'd agree and then go ahead and build their own AI.
Credible means enforceable. Isn't this a Dark Forest situation, where malicious signatories can continue to build AI in secret? If so, even non-malicious signatories will feel compelled to develop AI in secret to avoid getting shivved in the back.
It's also worth noting there is, basically, one other country in the world that can really present a challenge in the AI Arms Race, and that's China. Does anyone here think that the US is willing and able to conduct airstrikes on Chinese territory on the chance they're violating the AINPT (AI Non-Proliferation Treaty)? Because I don't.
Right. My problem with AI doomers is their inability to imagine an alternative I would actually prefer to paperclipping. If this is the solution to the problem, I'd rather the problem not be solved.
Devil’s advocate, but “tracks all private behavior” isn’t necessary; only controlling the flow of high-tier computing resources.
I know that the ability—or will—of a state to stop at that point is rather suspect.
"Only" controlling the flow of high-tier computing resources is really hard, especially when that control needs to be exercised across the entire globe. And the basic elements (GPUs) presumably won't be outlawed, just large clusters. And it's not enough to shut down 99% of clusters, since even a few slipping through the cracks is still a major threat if Yud's argument is correct. Absent panopticon-like surveillance and control, how would this be even remotely feasible?