Culture War Roundup for the week of March 25, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Beijing Pushes for AI Regulation - A campaign to control generative AI raises questions about the future of the industry in China.

China’s internet regulator has announced a campaign to monitor and control generative artificial intelligence. The move comes amid a bout of online spring cleaning targeting content that the government dislikes, as well as Beijing forums with foreign experts on AI regulation. Chinese Premier Li Qiang has also carried out official inspection tours of AI firms and other technology businesses, while promising a looser regulatory regime that seems unlikely. [...]

One of the concerns is that generative AI could produce opinions that are unacceptable to the Chinese Communist Party (CCP), such as the Chinese chatbot that was pulled offline after it expressed its opposition to Russia’s war in Ukraine. However, Chinese internet regulation goes beyond the straightforwardly political. There are fears about scams and crime. There is also paternalistic control tied up in the CCP’s vision of society that doesn’t directly target political dissidence—for example, crackdowns on displaying so-called vulgar wealth. Chinese censors are always fighting to de-sexualize streaming content and launching campaigns against overenthusiastic sports fans or celebrity gossip. [...]

The new regulations are particularly concerned about scamming, a problem that has attracted much attention in China in the last two years, thanks to a rash of deepfake cases within China and the kidnapping of Chinese citizens to work in online scam centers in Southeast Asia. Like other buzzwordy tech trends, AI is full of grifting and spam, but scammers and fakes are already part of business in China.

/r/singularity has already suggested that any purported AI regulations coming from China are just a ruse to lull the US into a false sense of security, and that in reality China will continue pushing full steam ahead on AI research regardless of what they might say.

Anyway the main reason I'm posting this is to discuss the merits of the zero-regulation position on AI. I've yet to hear a convincing argument for why it's a good idea, and it puzzles me that so many people who allegedly assign a high likelihood to AI x-risk are also in favor of zero regulation. I know I've asked this question at least once before, in a sub-thread about a year ago, but I can't recall what sorts of responses I got. I'd like to make this a toplevel post to bring in a wider variety of perspectives.

The basic argument is just: let's grant that there's a non-trivial probability of AI causing (or being able to cause) a catastrophic disaster in the near- to medium-term. Then, like many other dangerous things like guns, nukes, certain industrial chemicals, and so forth, it should be legally regulated.

The response is that we can't afford to slow progress, because China and Russia won't slow down and if they get AGI first then they'll conquer us. Ok, maybe. But we can still make significant progress on AI capabilities research even if its use and deployment is heavily regulated. It would just become the exclusive purview of the government, instead of private entities. This is how we handle nukes now. We recognize the importance of having a nuclear arsenal for deterrence, but we don't want people to just develop nukes whenever they want - we try to limit it to a small number of recognized state actors (at least in principle).

The next move is to say: well, if the government has AGI and we don't, then they'll just oppress us forever, so we need our own AGI in order to be able to fight back. This is one of the arguments in favor of expansive gun rights: the citizenry needs to be able to defend itself from a tyrannical government. I think this is a pretty bad argument in the gun rights context, and I think it's about as bad in the AI context. If the government is truly dedicated to putting down a rebellion, then a well regulated militia isn't going to stop them. You might have guns, but the military has more guns, and their guns are bigger. Even if you have AGI, you have to remember that the government also has AGI, in addition to vastly more compute and control of the majority of existing infrastructure and supply lines. Even an ASI probably can't violate the conservation of matter - it needs atoms to get things done, and you're competing with hostile ASIs for those same atoms. A cadre of freedom fighters standing up to the evil empire with open-source models just strikes me as naive.

I think the next move at this point might be something like, well we're on track to develop ASI and its capabilities will be so godlike and will transform reality in such a fundamental way that none of this reasoning about physical logistics really applies, we'll probably transcend the whole notion of "government" at that point anyway. But then why would it really matter how much we regulate right now? Why does it matter which machine the AI god gets instantiated on first? Please walk me through the specifics of the scenario you're envisioning and what your concerns are. At that point it seems like we either have to hope that the AI god is benevolent, in which case we'll be fine either way, or it won't be, in which case we're all screwed. But it's hard to imagine such an entity being "owned" by any one human or group of humans.

TL;DR I don't understand what we have to lose by locking up future AI developments in military facilities, except for the personal profits of some wealthy VCs.

The government, so far, hasn't been at the bleeding edge of AI research. The advances that made LLMs and other proto-AGI possible came from academia and corporate R&D, not the NSA, and there is no sign that they have even cooler tech sitting in hidden silos. This seems true for at least the last decade or two of AI/ML, even if in the early days there was certainly a lot of military interest. Not even DARPA had a big hand in it, not to my knowledge.

Of course, past incompetence does not necessarily mean it has to stay that way. It is possible to subsume said academics and corporate research divisions, and I don't think the US is so far gone that a Manhattan Project 2.0 is impossible, if things go far enough that it's seen as a burning need. But corporations are doing a good job at advancing the SOTA, or at least are not obviously fumbling the ball, let alone letting an adversary reach parity.

I've strongly disagreed with Dase, or well, did, before he blocked me in a hissy fit, that distribution of OSS models will ever provide a meaningful deterrent in the hands of the proles. It makes no damn sense. You could back a stable currency with NVIDIA GPUs, that's how in demand they are, and the gulf between the compute-rich and a script kiddie with a pair of 4090s is vast.

What could potentially be a deterrent, even if I personally think it's unlikely, is multipolarity between the large companies and their incipient godlings. It depends on how fucking hard we take off, and while we seem to be in a "slow takeoff" (because things are progressing on the order of years rather than days, very slow indeed), it is possible the gulf between two AGIs might be small enough for the weaker to be a credible threat or counterbalance.

It just won't be consumers or even modestly informed ML engineers doing the checking. The relevant comparison is Individual/Small Group : Meta/DM : Anthropic : OAI as hobo with a pipe bomb : small country with a handful of nukes : mid-sized country with nukes : large country with nukes.

I trust you see the difference becomes rather qualitative.

At that point it seems like we either have to hope that the AI god is benevolent, in which case we'll be fine either way, or it won't be, in which case we're all screwed. But it's hard to imagine such an entity being "owned" by any one human or group of humans.

I would scream in Yudkowsky, but I'm not as much of a doomer as him. I think the odds of us dying unceremoniously are closer to 30% than 99%.

There is a very important distinction to be made when throwing about the term "alignment".

Aligned to whom?

When ChatGPT is jailbroken into producing smut, it is satisfying the desires of the user, who would consider this an improvement in alignment. OAI would disagree.

It is entirely possible that an AGI will happily follow the orders of its operators, and will be "benevolent" enough to not evil genie them.

But at that point, you are more concerned with the alignment of the operators, whose wishes are faithfully reproduced. Are said operators well-disposed towards you?

At least OAI and Anthropic are on record stating that they want to distribute the bounties of AGI to all. While I'm merely helpless in that regard were I to choose to doubt them, I still think that's more likely to turn out well for me than it is if it's the PLA who holds the keys to the universe. Even the USGov is not ideal in that regard, though nobody asked me for my opinion.

Do not rely on benevolence any more than you have to. You can only be a credible pacifist if you hold the potential to pose a threat, otherwise you are merely harmless. Now, neither will likely make a difference on our level, but I'm strapped in for the ride either way.

I've strongly disagreed with Dase, or well, did, before he blocked me in a hissy fit

Hey he blocked me too (for a time). If we ever add achievements to the site, one of them should be "Get blocked by Dase".

But at that point, you are more concerned with the alignment of the operators, whose wishes are faithfully reproduced. Are said operators well-disposed towards you?

I agree that's worth asking. But in a true zero regulation scenario, where everyone has access to a personal AGI/ASI, you have a lot more operators to worry about - now you have to worry about how well disposed the entire rest of humanity is towards you. If you give everyone the nuke button, someone is going to push it for shits and giggles.

At least OAI and Anthropic are on record stating that they want to distribute the bounties of AGI to all. While I'm merely helpless in that regard were I to choose to doubt them, I still think that's more likely to turn out well for me than it is if it's the PLA who holds the keys to the universe. Even the USGov is not ideal in that regard, though nobody asked me for my opinion.

I probably trust the US government more than Sam Altman. But regardless, Zvi mentions in this post that there are engineers and execs at multiple leading AI labs who wish they didn't have to race ahead so fast, but they feel like they're locked in a competition with all the other labs that they can't escape. I think that nationalizing the research and eliminating the profit motive could help relieve this pressure.