
Culture War Roundup for the week of November 28, 2022

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Regarding AI alignment -

I'm aware of, and share, @DaseindustriesLtd's aesthetic objection that the AI safety movement is itself not terribly aligned with my values, and that the payoff of letting them perform their "pivotal act" (which involves deputy godhood for themselves) does not look so attractive from the outside. Still, the overall Pascal's Mugging performed by Yudkowsky, TheZvi etc., as linked downthread, really does seem fairly persuasive as long as you accept the assumptions they make. With all that being said, to me the weakest link in their narrative has always been in a different place than either the utility of their proposed eschaton or the probability that an AGI becomes Clippy, and I've seen very little discussion of the part that bothers me, though I may not have looked hard enough.

Specifically, it seems to me that everyone in the field accepts as gospel the assumption that AGI takeoff would (1) be very fast (minimal time from (1+ε)× human capability to C× human capability, for some C on the order of theoretical upper bounds) and (2) be irreversible (P(the most intelligent agent on Earth will be an AGI n units of time in the future | the most intelligent agent on Earth is an AGI now) ≈ 1). I've never seen the argument for either of these made in any way other than repetition and a sort of obnoxious insinuation that if you don't see them as self-evident you must be kind of dull. Yet I remain far from convinced of either (though, to be clear, it's not like I'm convinced of their negations either).
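
To spell out how I read those two assumptions, here is a rough formalisation in my own notation (H stands for baseline human capability, and ε, δ, C are placeholder constants; none of this is canonical LW material, just my attempt to pin the claims down):

```latex
% (1) Fast takeoff: negligible time from barely-superhuman to C-times-human capability
\mathrm{cap}(t_0) = (1+\varepsilon)\,H
\;\Longrightarrow\;
\mathrm{cap}(t_0+\delta) \ge C\,H,
\qquad \delta \approx 0,\; C \gg 1

% (2) Irreversibility: once an AGI is the most capable agent on Earth, it stays that way
P\big(\text{AGI is top agent at } t+n \,\big|\, \text{AGI is top agent at } t\big) \approx 1
\qquad \text{for all } n > 0
```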

Regarding (1), the first piece of natural counterevidence to me is the existence of human variation in intelligence. I'm sure you don't need me to sketch in detail an explanation of why the superintelligent-relative-to-baseline Ashkenazim, or East Asians, or John von Neumann himself didn't undergo a personal intelligence explosion, but whence the certainty that this explanation won't, in part or in full, also apply to the superintelligent AGIs we construct? Sure, there is a certain argument that computer programs are easier to reproduce, modify and iterate upon than wetware, but this advantage is surely not infinitely large, and we do not even have the understanding to quantify it in natural units. "Improving a silicon-based AI is easier than improving humans, therefore assume it will self-improve about instantaneously even though humans didn't" is extremely facile. It took humans something like 10k years of urbanised society to get to the point where building something superior to humans at general reasoning seems within grasp. Even if that next thing is much better than us, how do we know whether moving another step beyond that will take 5k, 1k, 100, 10 or 1 year, or minutes? The superhuman AIs we build may well come with their own set of architectural constraints that force them into a hard-to-leave local minimum, too. If the Infante Eschaton is actually a transformer talking to itself, how do we know it won't be forever tied down by an unfortunate, utterly insurmountable tendency to exhibit tics in response to Tumblr memes in its token stream that we accidentally built into it, or by a hidden high-order term in the cost/performance function of the entire transformer architecture and anything like it, so that for a sweet 100 years we get AI Jeeves but not much more?

Secondly, I'm actually very partial to the interpretation that we have already built "superhuman AGI", in the shape of corporations. I realise this sounds like a trite anticapitalist trope, but being put on a bingo board is not a refutation. It may seem like an edge case given the queer computational substrate, but at the same time I'm struggling to find a good definition of superhuman AGI that naturally does not cover them. They are markedly non-human, have their own value function that their computational substrate is compelled to optimise for (fiduciary duty), and exhibit capacities in excess of any human (which is what makes them so useful). Put differently, if an AI built by Google on GPUs does ascend to Yudkowskian godhood, in the process rebuilding itself on nanomachines and then on computronium, what reason does the alien historian looking upon the simulation from the outside have to place the starting point of "the singularity" specifically at the moment that Google launched the GPU version of the AI to further Google's goals, as opposed to when the GPU AI launched the nanomachine AI in furtherance of its own goals, or when humans launched the human-workers version of Google to further their human goals? Of all these points, the last one I listed (humans launching Google) seems the most special to me, because it marks the beginning of the chain in which intelligent agents deliberately construct more intelligent agents in furtherance of their goals. However, if the descent towards the singularity has already started, so far it's been taking its sweet time. Why do we expect a crazy acceleration at the next step, apart from the ancient human tendency to believe ourselves to be living in the most special of times?

Regarding (2), even if $sv_business or $three_letter_agency builds a superhuman AI that is rapidly going critical, what's to say this won't be spotted and quickly corroborated by an assortment of Russian and/or Chinese spies, and that those governments don't have some protocol in place that will result in them preemptively unloading their nuclear arsenals on every industrial center in the US? If the nukes land, the reversal criterion will probably be satisfied, and it's likely enough that the AI will be large enough, and depend on sufficiently special hardware, that it can't just quickly evacuate itself to AWS Antarctica. At that point, the AI may already be significantly smarter than humans without having the capability to resist. Certainly the Yudkowsky scenario of bribing people into synthesising the appropriate nanomachine peptides can't be executed on 30 minutes' notice, and I doubt even a room full of uber-von Neumanns on amphetamines (especially ones bound to the wheelchair of specialty hardware and a reliable electricity supply) could contrive a way to save themselves from 50 oncoming nukes in that timespan. Of course this particular class of scenario may have very low probability, but I do not think that probability is 0; and the more slowness, and perhaps also fragility, of early superhuman AIs we are willing to concede per point (1), the more opportunities arise for individually low-probability reversals like this.

All in all, I'm left with a far lower subjective belief that the LW-canon AGI apocalypse will happen as described than Yudkowsky's near-certainty that seems to be offset only by black swan events before the silicon AGI comes into being. I'm gravitating towards putting something like a 20% probability on it, without being at all confident in my napkinless mental Bayesianism, which is of course still very high for x-risk but makes the proposed "grow the probability of totalitarian EA machine god" countermeasure look much less attractive. It would be interesting to see if something along the lines of my thoughts above has already been argued against in the community, or if there is some qualitative (because I consider the quantitative aspect to be a bit hopeless) flaw in my lines of reasoning that stands out to the Motte.
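
To make the napkin a little less mental: the kind of toy decomposition I have in mind looks roughly like the sketch below. Every probability in it is a made-up placeholder chosen purely for illustration, not a considered estimate of mine or anyone else's.

```python
# Toy decomposition of P(LW-canon AGI apocalypse).
# All numbers are illustrative placeholders, not considered estimates.
p_agi_built    = 0.8  # superhuman AGI gets built at all
p_fast_takeoff = 0.5  # assumption (1): takeoff is fast
p_irreversible = 0.6  # assumption (2): no reversal (nukes, fragility, ...)
p_clippy       = 0.8  # the resulting agent is misaligned badly enough to be fatal

p_doom = p_agi_built * p_fast_takeoff * p_irreversible * p_clippy
print(f"P(doom) ~ {p_doom:.2f}")  # ~0.19 with these placeholders
```

The point is only that once assumptions (1) and (2) get probabilities meaningfully below 1, the product lands far from Yudkowskian near-certainty even if the other factors stay high.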

I find the corporation analogy pretty interesting/compelling as well.

It was brought up in this big LessWrong post recently and I didn't find any of the counterarguments in the comments to be very strong (though most people focused on other arguments).

Imagine a corporation that wasn't thoroughly embedded in our social, historical, or moral environment, and that had employees and managers substantially smarter and faster at execution than humans. And imagine this corporation could produce more super-employees and super-managers just by hitting silicon wafers with ultraviolet light, as opposed to recruiting existing humans who have human instincts and desires. That might be a problem, right?

Unless I'm mistaken, the argument is something like "once we build an intelligent, goal-oriented agent smarter than any human on earth, it will quickly bootstrap itself to godhood and then destroy the planet and probably the galaxy and maybe the universe."

But as far as I can tell, corporations already meet this definition. They are inhuman, goal-oriented agents smarter than any given human on earth (by the combined intelligence of all their human constituent parts). The fact that they're made up of humans doesn't seem to be all that relevant, because the corporation itself is not human despite humans being the "material" from which it is made.

The fact that they're made up of humans doesn't seem to be all that relevant, because the corporation itself is not human despite humans being the "material" from which it is made.

The problem with corpos being made up of humans is similar to trying to make ever-better computers without changing transistor size. You can optimize the layout, cooling, etc., but you'll forever be bound by that size. Corpo capabilities and architecture are constrained by their components. They would be a lot more dangerous if they could produce better humans at scale (compare the performance of Jane Street vs retail investors, or special forces vs green Army grunts), or produce a new kind of part to do the mental and social work (AI).

Isn't the whole point of the argument that AI will be such a threat because it will, by virtue of being more intelligent than us, be able to breezily figure things out (like self-improvement) that we simply couldn't because of our inferior intelligence? If that's the case, it doesn't seem to matter much that corporations (or, as pointed out below, any form of supra-human coordination: states, political parties, etc.) have certain limitations at the outset, because their 'superintelligence' ought to allow them to overcome those limitations in short order. After all, the self-improvement scenario also assumes that AI is limited at the outset but rapidly transcends those limits.

Right, but corporations that are staffed by humans aren't smarter than humans and can't become smarter than humans. "Being a corporation" doesn't remove the scaling limits that constrain the human brain specifically. If you remove that limiting factor, then yes, corporations are scary too.

A corporation (really, any human organization; I think I'll just say that going forward) is smarter than any individual human within it, by virtue of being composed of many different intelligences. Likely, any (or at least almost any) human organization is smarter than any individual human on earth, since it is the sum total of all the human intelligences that make it up. This is comparable to the oft-repeated hypothetical where AI bootstraps by copying itself many times over. So I think it is fair to describe a human organization as a "superintelligence" in the same sense meant by AI x-risk proponents.

I think intelligence as a single axis really breaks down here. Well-run organisations can beat humans in specific ways — better parallelization, less likely to get bored/tired, wider and deeper expertise — but often not in the ways that are really interesting. (If von Neumann joined as an entry-level employee at some megacorp today, would the organisation become smarter than him in any reasonable sense?)

Orgs seem good at gluing together boring competencies and shoring up human shortcomings, but we haven't figured out the interesting stuff yet — we have no idea how to assemble 1000 mediocre writers into a Steinbeck or 1000 mediocre physicists into a Feynman.

So I think "superintelligence" is the wrong word for orgs. "Superhuman", yeah, in the more limited sense that a horse or a plane is superhuman in some capacities. But we're not at the point (yet) where we've cracked the alchemy of coordinating lots of human intelligences into an organisational superintelligence. That, I think, is the critical difference between orgs right now and actual x-risk from superintelligences.
