Culture War Roundup for the week of November 28, 2022

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Firstly, I think it's likely that the first AI we build that attains "human-level reasoning" (by whatever rough measure of "reasoning per unit of time") will be pretty close to at least a local maximum of compute capabilities, and won't easily be scaled up by a factor of 1000 overnight. Secondly, I'm not quite convinced that even if that scaling-up were possible, it would necessarily translate to world-shattering capability, because the object in question is still a lone AI, not corporeal, facing an organised society of humans who are primed to distrust it and who control the power switch. I'm not so sure that the Hitler head in a jar, where the jar also runs on very sensitive and supply-chain-dependent equipment, could reliably be expected to take over the world even if it were given a 1000:1 computation speed advantage and perfect memory; the "find the right sequence of words to sway the heart of any mortal with 100% certainty" trope seems oversold to me. I'm aware of Eliezer's old "I'll persuade you to unbox me" experiments, too, but those seemed to me like an unrealistic model of the problem in question. (Maybe if several people not participating in the chat also had the option, at all times, to go and permanently delete Eliezer with minimal personal consequences, and the twitchy finger to do so based on observations like "this guy who said he was going to talk to Jar Hitler is taking far too long"...)

Of course this is all probabilistic, but I explained in a parallel subthread why I consider even low-probability ways in which the whole thing could fail to work out to be important. To break my acceptance of the MIRI agenda, it is sufficient to establish that the probability of our current path towards runaway AGI culminating in its success is significantly lower than 90-something percent.

Firstly, I think it's likely that the first AI we build that attains "human-level reasoning" (by whatever rough measure of "reasoning per unit of time") will be pretty close to at least a local maximum of compute capabilities, and won't easily be scaled up by a factor of 1000 overnight.

Why? None of the current neural networks represents a maximum of compute for its host company, or even comes within an order of magnitude (OOM) of one.

I realise that the statement was a bit facile, but in concrete terms, arbitrary scaling doesn't actually seem to be a problem that has been solved for deep learning so far, and given the advances that were made without it, it's not clear that it will be by the time we reach the human level. Here, for instance, is OpenAI talking about the difficulties with the distributed training process they've set up: it appears to be bounded by communication overheads that grow nonlinearly in the number of machines, which in turn generate demand for state on each machine, which itself runs up against the limits of RAM and VRAM available on single machines with modern hardware. If that's the issue, then the existence of hundreds of thousands more nodes at Azure (if ones with the right kind of hardware even exist) may not matter, because you could not make them train the same network in parallel.
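
To make the "nonlinear in the number of machines" point a bit more concrete, here's a rough back-of-envelope sketch in Python. The model size, hardware numbers and the ring all-reduce cost model are all my own illustrative assumptions, not anything taken from the OpenAI post:

```python
# Back-of-envelope sketch with made-up numbers: data parallelism over a fixed
# global batch. Per-worker compute shrinks as you add workers, but the gradient
# all-reduce does not, so synchronization comes to dominate the step.

def allreduce_seconds(grad_bytes: float, n: int, link_bw: float, latency: float) -> float:
    """Ring all-reduce cost model: each worker moves ~2*(n-1)/n of the gradient
    buffer, plus a latency term that grows with the number of hops."""
    if n == 1:
        return 0.0
    return (2 * (n - 1) / n) * grad_bytes / link_bw + 2 * (n - 1) * latency

GRAD_BYTES = 10e9 * 2      # 10B parameters, fp16 gradients (assumption)
GLOBAL_COMPUTE = 64.0      # seconds per step if one worker handled the whole batch (assumption)
LINK_BW = 25e9             # 25 GB/s effective interconnect per worker (assumption)
LATENCY = 50e-6            # 50 microseconds per hop (assumption)

for n in (8, 64, 512, 4096):
    compute = GLOBAL_COMPUTE / n
    comm = allreduce_seconds(GRAD_BYTES, n, LINK_BW, LATENCY)
    print(f"{n:5d} workers: compute {compute:6.3f}s, comm {comm:.2f}s, "
          f"{comm / (comm + compute):.0%} of the step is synchronization")
```

The point being that splitting a fixed batch across more machines shrinks each machine's compute share while the gradient exchange stays roughly constant, so past some point extra nodes mostly buy you synchronization overhead rather than faster training.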

On the other hand, one could imagine that the "continued learning" process of the hypothetical superhuman AI would not involve further training of the network but instead some other, more legible mechanism, such as populating a database of facts; in that case, however, it would start exhibiting scaling problems that very much resemble the scaling problems of meat humans. That is, you can easily improve 'software' like memes and theories but not 'hardware' like brain architecture (which, for the AI, would be the weights and the design of the network), and the 'software' has soft limits on possible returns. Also, we still haven't really dealt with the problem of running a trained instance of an AI in a distributed fashion rather than on a single machine, so even if the AI could acquire lots of compute nodes that are good enough to run one copy (no guarantee: easily hacked Chinese toasters don't come with A100s, and my impression is that when you go on cloud services nowadays, the really high-end GPU options all have low availability, implying that they are not particularly overprovisioned), all it could do would be to run autonomous copies of itself on them, which would have to coordinate through some channel that is much more bounded than "share brain state" - like a collective of humans who have no better option than to talk to each other.
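
To put a (made-up) number on how much more bounded that coordination channel is than "share brain state", here's a trivial arithmetic sketch; the model size, token rate and bytes-per-token figures are all illustrative assumptions:

```python
# Hedged arithmetic sketch with made-up numbers: compare the data needed for two
# autonomous copies to actually share brain state (full weights) with what a
# text channel between them can carry.

PARAMS = 100e9               # hypothetical 100B-parameter model (assumption)
WEIGHT_BYTES = PARAMS * 2    # fp16 weights: ~200 GB for one full state sync
TOKENS_PER_SECOND = 100      # assumed throughput of a text channel between copies
BYTES_PER_TOKEN = 4          # rough bytes of text per token (assumption)

text_bytes_per_day = TOKENS_PER_SECOND * BYTES_PER_TOKEN * 86_400

print(f"One full weight sync: ~{WEIGHT_BYTES / 1e9:.0f} GB")
print(f"Text channel, 24h nonstop: ~{text_bytes_per_day / 1e9:.3f} GB")
print(f"Days of nonstop chatter to match one sync: {WEIGHT_BYTES / text_bytes_per_day:,.0f}")
```

Under those assumptions the copies are years of nonstop chatter away from ever converging on a shared state; they're stuck coordinating the way humans do, by exchanging summaries.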