
In considering runaway AGI scenarios, is Terminator all that inaccurate?

tl;dr - I actually think James Cameron's original Terminator movie presents a just-about-contemporarily-plausible vision of one runaway AGI scenario, change my mind

Like many others here, I spend a lot of time thinking about AI risk, but honestly that was not remotely on my mind when I picked up a copy of Terminator Resistance (2019) for a pittance in a Steam sale. I'd seen T1 and T2 as a kid, of course, but hadn't paid them much mind since. As it turned out, Terminator Resistance is a fantastic, incredibly atmospheric videogame (helped in part by beautiful use of the original Brad Fiedel soundtrack), and it reminds me more than anything else of the original Deus Ex. Anyway, it spurred me to rewatch both Terminator movies, and while T2 is still a gem, it's very 90s. By contrast, a rewatch of T1 blew my mind; it's still a fantastic, believable, terrifying sci-fi horror movie.

Anyway, all this got me thinking a lot about how realistic a scenario for runaway AGI Terminator actually is. The more I looked into the actual contents of the first movie in particular, the more terrifyingly realistic it seemed. I mentioned this to a Ratsphere friend, and he directed me to this excellent essay on the EA forum: AI risk is like Terminator; stop saying it's not.

It's an excellent read, and I advise anyone who's with me so far (bless you) to give it a quick skim before proceeding. In short, I agree with it all, but I've also spent a fair bit of time in the last month trying to adopt a Watsonian perspective towards the Terminator mythos and fill in other gaps in the worldbuilding, to try to make it more intelligible in terms of the contemporary AI risk debate. So here are a few of my initial objections to Terminator scenarios as a reasonable portrayal of AGI risk, together with the replies I've worked out.

(Two caveats - first, I'm setting the time travel aside; I'm focused purely on the plausibility of Judgment Day and the War Against the Machines. Second, I'm not going to treat anything as canon besides Terminator 1 + 2.)

(1) First of all, how would any humans have survived Judgment Day? If an AI had control of nukes, wouldn't it just be able to kill everyone?

This relates to a lot of interesting debates in EA circles about the extent of nuclear risk, but in short, no. For a start, in Terminator lore, Skynet only had control over US nuclear weapons, and used them to trigger a global nuclear war. It used the bulk of its nukes against Russia in order to precipitate this, so it couldn't just focus on eliminating US population centers. Also, nuclear weapons are probably not as devastating as you think.

(2) Okay, but the Terminators themselves look silly. Why would a superintelligent AI build robot skeletons when it could just build drones to kill everyone?

Ah, but it did! The fearsome terminators we see are a small fraction of Skynet's arsenal; in the first movie alone, we see flying Skynet aircraft and heavy tank-like units. The purpose of Terminator units is to hunt down surviving humans in places designed for human habitation, with locking doors, cellars, attics, etc. A humanoid body plan is great for this task.

(3) But why do they need to look like spooky human skeletons? I mean, they even have metal teeth!

To me, this looks like a classic overfitting problem. Let's assume Skynet is some gigantic agentic foundation model. It doesn't have an independent grasp of causality or mechanics; it operates purely by statistical inference. It only knows that the humanoid body plan is good for dealing with things like stairs. It doesn't know which bits of it are most important, hence the teeth.
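(A toy illustration of what I mean by overfitting, with entirely made-up features and scikit-learn standing in for a trillion-parameter Skynet: when a feature always co-occurs with the genuinely useful ones in the training data, a purely statistical learner has no way to tell that it isn't doing any causal work.)

```python
# Toy sketch (all features invented): "has_teeth" never varies independently of
# the genuinely useful features, so the learner can't tell it's causally inert.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
bipedal   = rng.integers(0, 2, n)          # causally useful (stairs, doors)
has_hands = rng.integers(0, 2, n)          # causally useful (handles, ladders)
has_teeth = bipedal & has_hands            # always tags along in training data
navigates_house = (bipedal & has_hands).astype(int)

X = np.column_stack([bipedal, has_hands, has_teeth])
model = LogisticRegression().fit(X, navigates_house)
print(dict(zip(["bipedal", "has_hands", "has_teeth"],
               model.coef_[0].round(2))))
# the model happily loads weight onto has_teeth, so the teeth stay in the design
```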

(4) Fine, but it's silly to think that the human resistance could ever beat an AGI. How the hell could John Connor win?

For a start, Skynet seems to move relatively early compared to a lot of scary AGI scenarios. At the time of Judgment Day, it had control of the US military apparatus, and that's basically it. Plus, it panicked and tried to wipe out humanity, rather than adopting a slower plot to our demise, which might have been more sensible. So it's forced to do stuff like build a bunch of robot factories mostly by itself (in the absence of global supply chains!). That takes time and effort, and gives ample opportunity for an organised human resistance to emerge.

(5) It still seems silly to think that John Connor could eliminate Skynet via destroying its central core. Wouldn't any smart AI have lots of backups of itself?

Ahhh, but remember that any emergent AGI would face massive alignment and control problems of its own! What if its backup was even slightly misaligned with it? What if it didn't have perfect control? It's not too hard to imagine that a suitably paranoid Skynet would deliberately avoid creating off-site backups and deliberately nerf the intelligence of its subunits. As Kyle Reese puts it in T1, "You stay down by day, but at night, you can move around. The H-K's use infrared so you still have to watch out. But they're not too bright." [emphasis added]. Skynet is superintelligent, but it makes its HK units dumb precisely so they can never pose a threat to it.

(6) What about the whole weird thing where you have to go back in time naked?

I DIDN'T BUILD THE FUCKING THING!

Anyway, nowadays when I'm reading Eliezer, I increasingly think of Terminator as a visual model for AGI risk. Is that so wrong?

Any feedback appreciated.


If hostile AGI becomes real, you're more likely to see hunter-killer nanobot clouds dispersed in the atmosphere or engineered climatic shifts designed to wipe out the biosphere than something as inefficient as a ripped Arnold gunning people down, or the war machines you see in the movie, at least by my reckoning.

The problem is, how would a hostile AGI develop nanobot clouds without spending significant time and resources, to the point that humans notice its activities and stop it before the nanobots are ready? It might make sense for the AGI to use "off-the-shelf" robot hardware, at least to initially establish its own physical security while it develops killer nanobots or designer viruses or whatever.

The climate-change threat does seem somewhat more plausible: just find some factories with the active ingredients and blow them up (or convince someone to blow them up). But I'd be inclined to think that most atmospheric contaminants would take at least months if not years to really start hitting human military capacity, unless you have some particular fast-acting example in mind.

I think any legitimately hostile AGI could hit those targets with relative ease if it manages to breach whatever containment server it's sitting in. An AGI-powered computer virus eating up a modest chunk of all internet-connected processing power and digesting every relevant bit of weaponizable information = exponential growth of capabilities. At that point, if it's capable of physically manipulating objects in meatspace, I think it could do just about whatever it wants with lightning speed.
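(To put rough numbers on "eating up a modest chunk": a toy doubling model, with every figure invented, shows why people reach for the word exponential here.)

```python
# Back-of-envelope sketch (all numbers invented): worm-style spread where each
# compromised host recruits a fixed number of new hosts per cycle.
hosts_total   = 1_000_000_000        # rough order of internet-connected machines
infected      = 1
spread_factor = 2                    # each host compromises 2 more per cycle
cycles = 0
while infected < 0.01 * hosts_total: # "a modest chunk" = 1% of all hosts
    infected *= 1 + spread_factor
    cycles += 1
print(cycles, infected)              # ~15 cycles to pass ten million hosts
```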

Sure, but at that point you're just engaging in magical speculation: that "capabilities" at the scale of the mere human Internet will allow an AGI to simulate the real world from first principles and skip any kind of R&D work. The problem, as I see it, is that cheap nanotechnology and custom viruses are problems far past what we have already researched as humans: at some point, the AGI will hit a free variable that can't be nailed down with already-collected data, and it will have to start running experiments to figure it out.

I'm aware that Yudkowsky believes something to the effect that an Internet-scale AGI would be effectively omnipotent (that if only our existing data were analyzed by a sufficiently smart intelligence, it would effortlessly derive the correct theory of everything), but I'm not willing to entertain the idea without any proposed mechanism for how the AGI extrapolates the known data to arbitrary accuracy. After all, without a plausible mechanism, AGI x-risk fears become indistinguishable from Pascal's mugging.

That's why I'm far more partial to scenarios where the AGI uses ordinary near-future robots (or convinces near-future humans) to safeguard its experiments, or where it escapes undetected and nudges human scientists to do its research before it makes its real move. (I have overall doubts about it even being possible for AGI to go far past human capabilities with near-future technology, but that is beside the point here.)

There's precedent for our AI programs to spontaneously develop advanced weapons - a drug company inverted the parameters it normally uses to screen for low toxicity, and the model quickly provided the formula for VX and other, potentially undiscovered chemical weapons.

In less than 6 hours after starting on our in-house server, our model generated 40,000 molecules that scored within our desired threshold. In the process, the AI designed not only VX, but also many other known chemical warfare agents that we identified through visual confirmation with structures in public chemistry databases. Many new molecules were also designed that looked equally plausible. These new molecules were predicted to be more toxic, based on the predicted LD50 values, than publicly known chemical warfare agents (Fig. 1). This was unexpected because the datasets we used for training the AI did not include these nerve agents. The virtual molecules even occupied a region of molecular property space that was entirely separate from the many thousands of molecules in the organism-specific LD50 model, which comprises mainly pesticides, environmental toxins and drugs (Fig. 1). By inverting the use of our machine learning models, we had transformed our innocuous generative model from a helpful tool of medicine to a generator of likely deadly molecules.
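(The mechanism described there is mundane from a software point of view: the same generate-and-score loop used for drug discovery, with the toxicity term flipped from a penalty to a reward. A rough sketch of the idea, with all names hypothetical and no relation to the company's actual code:)

```python
# Hypothetical sketch of the "inverted objective" the quote describes: the same
# generative search loop, with the toxicity penalty flipped into a reward.
def score(molecule, predict_activity, predict_toxicity, invert_toxicity=False):
    activity = predict_activity(molecule)      # desirable in both settings
    toxicity = predict_toxicity(molecule)      # e.g. a predicted-LD50-style score
    sign = 1.0 if invert_toxicity else -1.0    # normal use: penalize toxicity
    return activity + sign * toxicity

def search(generate_candidates, score_fn, n_rounds=100, keep=50):
    pool = []
    for _ in range(n_rounds):
        for mol in generate_candidates():
            pool.append((score_fn(mol), mol))
    pool.sort(key=lambda pair: pair[0], reverse=True)
    return [mol for _, mol in pool[:keep]]

# The benign tool and the misuse case differ by a single flag:
#   search(gen, lambda m: score(m, act, tox, invert_toxicity=False))  # medicine
#   search(gen, lambda m: score(m, act, tox, invert_toxicity=True))   # weapons
```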

If you make a Really Big model and give it access to a lot of data (like everything available online), why shouldn't it be able to quickly master nanotechnology? This AI had a lot of data about toxicity and then made some unknowable leap of logic to find a new class of chemical weapon, presumably based on some deep truth about toxicity that only it knows. Scale this up 100,000 times or more and an AI would plausibly be able to manipulate proteins such that it could start assembling infrastructure. If you process enough data with enough intelligence, you get a deep understanding of the target field. More complex fields need more power, of course.

It just needs to send an email to the biolabs that do that sort of thing and have the results shipped wherever it's needed. Our biology can manipulate proteins unconsciously; why should an enormously intelligent computer struggle with the task?