
Culture War Roundup for the week of March 27, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Was a bit surprised to see this hadn't been posted yet, but yesterday Yudkowsky wrote an op-ed in TIME magazine where he describes the kind of regime that he believes would be necessary to throttle AI progress:

https://archive.is/A1u57

Some choice excerpts:

Many researchers working on these systems think that we’re plunging toward a catastrophe, with more of them daring to say it in private than in public; but they think that they can’t unilaterally stop the forward plunge, that others will go on even if they personally quit their jobs. And so they all think they might as well keep going. This is a stupid state of affairs, and an undignified way for Earth to die, and the rest of humanity ought to step in at this point and help the industry solve its collective action problem.

The moratorium on new large training runs needs to be indefinite and worldwide. There can be no exceptions, including for governments or militaries. If the policy starts with the U.S., then China needs to see that the U.S. is not seeking an advantage but rather trying to prevent a horrifically dangerous technology which can have no true owner and which will kill everyone in the U.S. and in China and on Earth. If I had infinite freedom to write laws, I might carve out a single exception for AIs being trained solely to solve problems in biology and biotechnology, not trained on text from the internet, and not to the level where they start talking or planning; but if that was remotely complicating the issue I would immediately jettison that proposal and say to just shut it all down.

Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for anyone, including governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.

If its presence in the CW thread needs justifying, well, it was published in a major magazine, and the kinds of policy proposals set forth would certainly ignite heated political debate were they ever to be seriously considered.

"Yudkowsky airstrike threshold" has already become a minor meme on rat and AI twitter.

I see we're back to trying to outlaw mathematics. I encourage everyone to read this article by Stephen Wolfram describing how LLMs work before panicking. I cannot understand the degree to which LLMs have apparently broken some people's brains.

I'm not sure why you find that article reassuring. Wait until you hear about the shitty hardware that human brains run on, roughly 20 watts! Yud isn't even saying that the current LLMs are all that dangerous; he's saying that we're pouring $10B/year and all the top talent into overcoming whatever limitations keep them from becoming as smart as or smarter than humans. What would make you scared?

I do not think the takeaway from the article is about the hardware that LLMs are run on. It's about the way LLMs function. The LLM doesn't understand the content of the query or its response the way you or I do. It just understands them as probabilistic sequences of tokens and its job is to predict the tokens that should come next. An interaction I recount in another comment showcases this issue. I point to the article because it is not clear to me that what LLMs do (token prediction) is the kind of thing that can be extrapolated to the dangers people like Yudkowsky are worried about with respect to unfriendly AI.
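
To make "token prediction" concrete, here is a minimal, purely illustrative sketch: a toy bigram model counted over a dozen words of made-up text, not a real transformer. The corpus, the bigram_counts table, and the prompt are all invented for the example; the only point is that generation amounts to repeatedly asking "given the text so far, which token is statistically most likely to come next?" and appending it.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the web-scale text a real LLM is trained on.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each token follows each other token (a bigram model:
# far simpler than a transformer, but the same "predict what comes next" idea).
bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def next_token_distribution(prev):
    """Probability of each candidate next token, given only the previous token."""
    counts = bigram_counts[prev]
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

def generate(prompt, steps=4):
    tokens = prompt.split()
    for _ in range(steps):
        dist = next_token_distribution(tokens[-1])
        if not dist:
            break
        # Greedy decoding: append the single most probable continuation.
        tokens.append(max(dist, key=dist.get))
    return " ".join(tokens)

print(next_token_distribution("the"))  # {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
print(generate("the dog"))             # "the dog sat on the cat" (greedy pick among ties)
```

A real LLM conditions on the whole context with a far richer model and samples rather than always taking the top token, but the loop is the same shape: score possible next tokens, pick one, repeat.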

What would make you scared?

If we had an AI that actually understood the meaning of what it was being asked.

The LLM doesn't understand the content of the query or its response the way you or I do. It just understands them as probabilistic sequences of tokens and its job is to predict the tokens that should come next.

This seems to me like a pretty shallow account of understanding, and the same criticism can be applied to humans. According to some people, like Scott Alexander, the human brain is "just" a multi-layer prediction machine. The feeling of understanding itself seems to be nothing extra special: some people on drugs like LSD feel as if they have cracked the code and now understand the whole universe and their place in it. In practice, understanding can be viewed as the ability to give correct output for a given input. We do not have access to many other methods; that is why we use tests to see whether students understand what they have learned.

Additionally, I do not think that saying the LLM doesn't understand the content of the query or its response the way you or I do is all that reassuring. Quite the contrary: LLMs give correct answers to a very large set of problems, and yet they obviously got there using a completely different approach than humans. This makes them more alien, more inscrutable, and thus more dangerous in my eyes.