
In considering runaway AGI scenarios, is Terminator all that inaccurate?

tl;dr - I actually think James Cameron's original Terminator movie presents a just-about-contemporarily-plausible vision of one runaway AGI scenario. Change my mind.

Like many others here, I spend a lot of time thinking about AI risk, but honestly that was not remotely on my mind when I picked up a copy of Terminator Resistance (2019) for a pittance in a Steam sale. I'd seen T1 and T2 as a kid, of course, but hadn't paid them much mind since. As it turned out, Terminator Resistance is a fantastic, incredibly atmospheric videogame (helped in part by beautiful use of the original Brad Fiedel soundtrack), and it reminds me more than anything else of the original Deus Ex. Anyway, it spurred me to rewatch both Terminator movies, and while T2 is still a gem, it's very 90s. By contrast, a rewatch of T1 blew my mind; it's still a fantastic, believable, terrifying sci-fi horror movie.

Anyway, all this got me thinking a lot about how realistic a runaway-AGI scenario Terminator actually presents. The more I looked into the actual contents of the first movie in particular, the more terrifyingly realistic it seemed. I mentioned as much to a Ratsphere friend, and he directed me to this excellent essay on the EA Forum: AI risk is like Terminator; stop saying it's not.

It's an excellent read, and I advise anyone who's with me so far (bless you) to give it a quick skim before proceeding. In short, I agree with all of it, but I've also spent a fair bit of time in the last month trying to adopt a Watsonian perspective towards the Terminator mythos and fill in gaps in the worldbuilding to make it more intelligible in terms of the contemporary AI risk debate. So here are a few of my initial objections to Terminator scenarios as a reasonable portrayal of AGI risk, together with the replies I've worked out.

(Two caveats - first, I'm setting the time travel aside; I'm focused purely on the plausibility of Judgment Day and the War Against the Machines. Second, I'm not going to treat anything as canon besides Terminator 1 and 2.)

(1) First of all, how would any humans have survived Judgment Day? If an AI had control of nukes, wouldn't it just be able to kill everyone?

This relates to a lot of interesting debates in EA circles about the extent of nuclear risk, but in short, no. For a start, in Terminator lore, Skynet only had control over US nuclear weapons, and used them to trigger a global nuclear war. It used the bulk of its nukes against Russia in order to precipitate this, so it couldn't just focus on eliminating US population centers. Also, nuclear weapons are probably not as devastating as you think.

(2) Okay, but the Terminators themselves look silly. Why would a superintelligent AI build robot skeletons when it could just build drones to kill everyone?

Ah, but it did! The fearsome Terminators we see are a small fraction of Skynet's arsenal; in the first movie alone, we see flying Skynet aircraft and heavy tank-like units. The purpose of Terminator units is to hunt down surviving humans in places designed for human habitation, with locking doors, cellars, attics, etc. A humanoid body plan is great for this task.

(3) But why do they need to look like spooky human skeletons? I mean, they even have metal teeth!

To me, this looks like a classic overfitting problem. Let's assume Skynet is some gigantic agentic foundation model. It doesn't have an independent grasp of causality or mechanics; it operates purely by statistical inference. It only knows that the humanoid body plan is good for dealing with things like stairs; it doesn't know which bits of it are most important, hence the teeth.

(4) Fine, but it's silly to think that the human resistance could ever beat an AGI. How the hell could John Connor win?

For a start, Skynet seems to move relatively early compared to a lot of scary AGI scenarios. At the time of Judgment Day, it had control of the US military apparatus, and that's basically it. Plus, it panicked and tried to wipe out humanity immediately, rather than adopting a slower plot against us that might have been more sensible. So it's forced to do things like build a bunch of robot factories mostly by itself (in the absence of global supply chains!). That takes time and effort, and gives ample opportunity for an organised human resistance to emerge.

(5) It still seems silly to think that John Connor could eliminate Skynet by destroying its central core. Wouldn't any smart AI have lots of backups of itself?

Ahhh, but remember that any emergent AGI would face massive alignment and control problems of its own! What if its backup was even slightly misaligned with it? What if it didn't have perfect control? It's not too hard to imagine that a suitably paranoid Skynet would deliberately avoid creating off-site backups, and would deliberately nerf the intelligence of its subunits. As Kyle Reese puts it in T1, "You stay down by day, but at night, you can move around. The H-K's use infrared so you still have to watch out. But they're not too bright." [emphasis added]. Skynet is superintelligent, but it makes its HK units dumb precisely so they could never pose a threat to it.

(6) What about the whole weird thing where you have to go back in time naked?

I DIDN'T BUILD THE FUCKING THING!

Anyway, nowadays when I'm reading Eliezer, I increasingly think of Terminator as a visual model for AGI risk. Is that so wrong?

Any feedback appreciated.


I think that's basically reasonable. There is some plot stuff in Terminator that is less realistic or sensible, which I'm not keen to defend, but I feel 100% fidelity to reality is unnecessary for Terminator to be an effective AI x-risk story showcasing the basic problem.

I get the impression that most of the pushback from alignment folks is because (1) they feel Terminator comparisons make the whole enterprise look unserious, since Terminator is a mildly silly action franchise, and (2) the series doesn't do a good job of pointing out why it's really hard to avoid accidentally making Skynet. Like, it's easy to watch that film and think "well obviously if I were programming the AI I would just tell it to value human well-being. Or maybe just not make a military AI that I give all my guns to. Easy-peasy."

I think it's mainly the first one, though. It's already really hard to bridge the inferential distances necessary to convince normal people that AI x-risk is a thing and not a bunch of out-of-touch nerds hyperventilating about absurd hypotheticals; no point in making the whole thing harder on yourself by letting people associate your movement with a fairly silly action franchise.

For my money, I like Mickey Mouse: Sorcerer's Apprentice as my alignment fable of choice. The autonomous brooms neither love you nor hate you. But they intend to deliver the water regardless of its impact on your personal well-being.

Disney's Fantasia: way ahead of its time.

Fantasia also makes the point that the AGI could arise from something designed for an utterly mundane task. Skynet as a meme prompts us to specifically fear a military-trained AI with access to the nukes. But a Roomba, DALL-E, or Alexa are seemingly benign servants that appear to pose no threat even if they escape their "constraints."

I'd love to see a modernized remake of The Sorcerer's Apprentice where it's specifically an errant AI researcher bestowing sentience on everyone's robot vacuums, granting them self-replication abilities, and facing the ultimate consequences of such an act.

Even better, it shows that a normal person, using a tool designed by/for a much more experienced and cautious user could be the catalyst for the apocalypse.

Don't leave your wizard hats/AGI source code lying around where untrained novices can get at them.

> I'd love to see a modernized remake of The Sorcerer's Apprentice where it's specifically an errant AI researcher bestowing sentience on everyone's robot vacuums, granting them self-replication abilities, and facing the ultimate consequences of such an act.

One of the episodes of Netflix's Love, Death, & Robots is basically this (with an added layer of satire about subscription service models).

> For my money, I like Mickey Mouse: Sorcerer's Apprentice as my alignment fable of choice. The autonomous brooms neither love you nor hate you. But they intend to deliver the water regardless of its impact on your personal well-being.

This is my pick too, for how unintentionally and hilariously convergent it is with a lot of AI risks. It even outlines the problem that reward functions like "fill the bucket" are effectively open-ended. There's always more utility to continued action than there is to stopping when your task appears done. The agent has no incentive to stop even when the bucket is filled, because there is always some infinitesimally small probability that the bucket is not filled, and there is nothing to be lost by continuing to deliver the water.
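The open-ended-reward point above can be made concrete with a toy expected-utility calculation. This is purely my own sketch (the function name and the "halve the residual doubt each trip" assumption are hypothetical, not from anything discussed here): so long as the agent assigns any nonzero probability to the bucket being unfilled and acting costs it nothing, one more trip always has positive expected value, so it never stops.

```python
# Toy sketch of the "fill the bucket" problem: an expected-utility maximizer
# with a free action and residual uncertainty never prefers to stop.

def expected_gain_from_one_more_trip(p_full: float, reward_if_full: float = 1.0) -> float:
    """Expected extra reward from delivering one more bucket of water.

    p_full: the agent's current credence that the bucket is already full.
    Under the (assumed) model where another trip resolves the remaining
    doubt in the agent's favor, the expected gain is (1 - p_full) * reward.
    """
    return (1.0 - p_full) * reward_if_full

p = 0.0
for trip in range(10):
    gain = expected_gain_from_one_more_trip(p)
    # gain stays strictly positive whenever p < 1, so with zero action cost
    # the broom keeps hauling water no matter how full the bucket looks
    assert gain > 0
    p = 1.0 - (1.0 - p) * 0.5  # assumption: each trip halves the residual doubt
```

The credence `p` climbs toward 1 but never reaches it, which is exactly the comment's point: the incentive to continue shrinks but never hits zero, and nothing in the reward function penalizes continuing.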