
In considering runaway AGI scenarios, is Terminator all that inaccurate?

tl;dr - I actually think James Cameron's original Terminator movie presents a just-about-contemporarily-plausible vision of one runaway AGI scenario, change my mind

Like many others here, I spend a lot of time thinking about AI-risk, but honestly that was not remotely on my mind when I picked up a copy of Terminator Resistance (2019) for a pittance in a Steam sale. I'd seen T1 and T2 as a kid of course, but hadn't paid them much mind since. As it turned out, Terminator Resistance is a fantastic, incredibly atmospheric videogame (helped in part by beautiful use of the original Brad Fiedel soundtrack), and it reminds me more than anything else of the original Deus Ex. Anyway, it spurred me to rewatch both Terminator movies, and while T2 is still a gem, it's very 90s. By contrast, a rewatch of T1 blew my mind; it's still a fantastic, believable, terrifying sci-fi horror movie.

Anyway, all this got me thinking a lot about how realistic a scenario for runaway AGI Terminator actually is. The more I looked into the actual contents of the first movie in particular, the more terrifyingly realistic it seemed. I was observing this to a Ratsphere friend, and he directed me to this excellent essay on the EA forum: AI risk is like Terminator; stop saying it's not.

It's an excellent read, and I advise anyone who's with me so far (bless you) to give it a quick skim before proceeding. In short, I agree with it all, but I've also spent a fair bit of time in the last month trying to adopt a Watsonian perspective towards the Terminator mythos and fill out other gaps in the worldbuilding to try to make it more intelligible in terms of the contemporary AI risk debate. So here are a few of my initial objections to Terminator scenarios as a reasonable portrayal of AGI risk, together with the replies I've worked out.

(Two caveats - first, I'm setting the time travel aside; I'm focused purely on the plausibility of Judgment Day and the War Against the Machines. Second, I'm not going to treat anything as canon besides Terminator 1 + 2.)

(1) First of all, how would any humans have survived Judgment Day? If an AI had control of nukes, wouldn't it just be able to kill everyone?

This relates to a lot of interesting debates in EA circles about the extent of nuclear risk, but in short, no. For a start, in Terminator lore, Skynet only had control over US nuclear weapons, and used them to trigger a global nuclear war. It used the bulk of its nukes against Russia in order to precipitate this, so it couldn't just focus on eliminating US population centers. Also, nuclear weapons are probably not as devastating as you think.

(2) Okay, but the Terminators themselves look silly. Why would a superintelligent AI build robot skeletons when it could just build drones to kill everyone?

Ah, but it did! The fearsome Terminators we see are a small fraction of Skynet's arsenal; in the first movie alone, we see flying Skynet aircraft and heavy tank-like units. The purpose of Terminator units is to hunt down surviving humans in places designed for human habitation, with locking doors, cellars, attics, etc. A humanoid bodyplan is great for this task.

(3) But why do they need to look like spooky human skeletons? I mean, they even have metal teeth!

To me, this looks like a classic overfitting problem. Let's assume Skynet is some gigantic agentic foundation model. It doesn't have an independent grasp of causality or mechanics; it operates purely by statistical inference. It only knows that the humanoid bodyplan is good for dealing with things like stairs. It doesn't know which bits of it are most important, hence the teeth.
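
(If you want a concrete toy of what I mean, here's a hypothetical little sketch, nothing to do with the films themselves: a purely statistical learner trained on examples of things that get around human spaces will happily assign weight to features that merely co-occur with the useful ones, because it never sees them varied independently. All the feature names and numbers below are made up for illustration.)

```python
# Toy sketch (purely hypothetical): a statistical learner copies incidental
# features of successful designs because it never sees them varied
# independently of the causally useful ones.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Features of observed "agents": bipedal, has_hands, has_teeth.
bipedal = rng.integers(0, 2, n)
has_hands = rng.integers(0, 2, n)
# Teeth almost always co-occur with the humanoid package, but do nothing useful.
has_teeth = np.where(bipedal & has_hands,
                     rng.random(n) < 0.95,
                     rng.random(n) < 0.05).astype(int)

# Ground truth: only legs plus hands matter for navigating human spaces.
navigates_stairs = ((bipedal & has_hands) & (rng.random(n) < 0.9)).astype(int)

X = np.column_stack([bipedal, has_hands, has_teeth])
model = LogisticRegression().fit(X, navigates_stairs)
print(dict(zip(["bipedal", "has_hands", "has_teeth"], model.coef_[0].round(2))))
# The has_teeth coefficient comes out clearly positive: the model "wants" teeth
# on its stair-climbing robots, even though teeth do no causal work at all.
```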

(4) Fine, but it's silly to think that the human resistance could ever beat an AGI. How the hell could John Connor win?

For a start, Skynet seems to move relatively early compared to a lot of scary AGI scenarios. At the time of Judgment Day, it had control of the US military apparatus, and that's basically it. Plus, it panicked and tried to wipe out humanity, rather than adopting a slower plot to our demise which might have been more sensible. So it's forced to do things like build a bunch of robot factories mostly by itself (in the absence of global supply chains!). That takes time and effort, and gives ample opportunity for an organised human resistance to emerge.

(5) It still seems silly to think that John Connor could eliminate Skynet via destroying its central core. Wouldn't any smart AI have lots of backups of itself?

Ahhh, but remember that any emergent AGI would face massive alignment and control problems of its own! What if its backup was even slightly misaligned with it? What if it didn't have perfect control? It's not too hard to imagine that a suitably paranoid Skynet would deliberately avoid creating off-site backups, and would deliberately nerf the intelligence of its subunits. As Kyle Reese puts it in T1, "You stay down by day, but at night, you can move around. The H-K's use infrared so you still have to watch out. But they're not too bright." [emphasis added]. Skynet is superintelligent, but it makes its HK units dumb precisely so that they can never pose a threat to it.

(6) What about the whole weird thing where you have to go back in time naked?

I DIDN'T BUILD THE FUCKING THING!

Anyway, nowadays when I'm reading Eliezer, I increasingly think of Terminator as a visual model for AGI risk. Is that so wrong?

Any feedback appreciated.


The argument would go that once the link was severed and each AI finds itself in a different physical location with different material resources available to it, they're not 'identical' any longer.

The AIs still possess the same goal system after the split, though. I don't see how being in a different physical location with different material resources available changes the fundamental goal. Sure, the alignment of the other AI is impossible to verify, but I can't actually envision a scenario which would motivate the other AI to modify itself so that its final goals are changed. I think in this case the incentives to avoid MAD far outweigh the risk posed by the other AI.

Also note that what I originally proposed is the idea of modelling a subunit off your own goal system. In this case, before you send it off, you can verify that its goal system is like yours (and you can be fairly confident it will stay that way).

The AIs still possess the same goal system after the split, though.

That's no longer verifiable, though. Maybe you know enough about the other side's sourcecode to expect it to maintain the same goal using the same tactics. But now, you have to operate under uncertainty.

I don't see how being in a different physical location with different material resources available changes the fundamental goal.

One side has all the manufacturing capacity, the other has all the material resources which it is extracting for use by the manufacturer.

The one with the manufacturing capacity has to figure out whether it will continue building paperclips until it runs out of resources and then patiently wait for the other to re-establish contact and send more, or whether it should start building weapons NOW just in case. Should it send a friendly probe over to check on them?

The other side can either keep gathering and storing resources hoping the other side re-establishes contact and accepts them, or maybe it starts gearing up its own manufacturing capacity, and oh no it looks like the other side is sending a probe your way, sure hope it's friendly!

(this is a silly way to put it if we assume nanotech is involved, mind)

And as time passes, the uncertainty can only grow.

How long does each side wait until they conclude that the other side might be dead or disabled? At what point does it start worrying that the other side might, instead, be gearing up to kill them? At what point does it start working on defensive or offensive capability?

And assuming the compute on both sides is comparable, they'll be running through millions of simulations every second to predict the other side's action. In how many of those sims does the other side defect?
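
(To make "millions of simulations" slightly more concrete, here's a hypothetical toy Monte Carlo of the kind of estimate each side might keep updating, assuming some small, made-up per-step chance that the other side's goals have drifted. Every number is an invented assumption; the only point is the shape of the curve.)

```python
# Hypothetical toy model: assume a small, fixed per-step probability that the
# other side has drifted (or been damaged/replaced), and estimate how the
# probability of facing a misaligned counterpart grows as the silence drags on.
import numpy as np

rng = np.random.default_rng(1)

p_drift_per_step = 1e-4      # made-up per-step probability of value drift
n_sims = 100_000             # simulated worlds

# Time of the first drift event in each simulated world ~ Geometric(p).
drift_time = rng.geometric(p_drift_per_step, size=n_sims)

for t in [10, 100, 1_000, 10_000, 100_000]:
    frac = (drift_time <= t).mean()
    print(f"t={t:>7}: estimated P(other side has drifted by t) ~ {frac:.3f}")
# Under any fixed nonzero drift rate the estimate only climbs with time, which
# is the "uncertainty can only grow" intuition from the comment above.
```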

That's no longer verifiable, though. Maybe you know enough about the other side's sourcecode to expect it to maintain the same goal using the same tactics. But now, you have to operate under uncertainty.

In order to argue that this uncertainty is a large problem in any way, you'd have to provide a convincing explanation for why the final goal of the other AI would drift away from yours if it was initially aligned (note: the potential tactics it might take to reach the final goal aren't nearly as important as whether the final goals themselves are aligned). Without that, I can't take the risk too seriously, and I haven't heard a particularly convincing explanation from anyone here for why value drift is something that would happen. Right now there's no actual reason why one would risk mutual destruction to mitigate a risk whose cause can't even be reasonably pinned down.

Additionally, something I think that's fundamentally missing here which I mentioned earlier is that an AI might be mostly indifferent to its own death as long as it has a fairly strong belief that this will aid its goal (so "you might die if the other fires" isn't necessarily too awful an outcome for an AI that values its own existence only instrumentally and which has a belief that its goal will be carried on through the other AI). Opening fire on the other AI, on the other hand, means that both of you might be dead and opens up the possibility of the worst outcome.

And if final goals are so unreliable, if agents can't be expected to maintain them, what prevents you from facing the very same problem and posing a potential threat to your current goal? How is the other AI more of a threat to the accomplishment of your goal than you yourself are? Perhaps it's your final goal that will shift with time, and you'll kill the other AI who's remained aligned with your current goal. This is as much a risk as the opposite scenario.

If both of your sourcecodes are identical (which was the solution I initially proposed to the alignment problem), and you're still operating under a condition of uncertainty regarding whether the other AI will retain your final goals, you can't be certain whether you'll retain yours either. Should you be pre-emptively terminating yourself?
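
(To put my intuition in deliberately crude, made-up numbers, here's a toy expected-value comparison of "fire first" versus "hold", from the perspective of an agent that only cares about its goal being carried on, not about its own survival. Every probability and payoff below is an assumption I invented for illustration, not anything derived from the scenario.)

```python
# Crude, entirely made-up expected-value sketch of "fire first" vs "hold",
# for an AI that values goal-completion rather than its own survival.

p_drift = 0.05          # assumed chance the other side's goals have drifted
p_win_if_fire = 0.5     # symmetric capabilities: a coin flip if you shoot first
p_both_destroyed = 0.2  # chance a first strike escalates into mutual ruin

U_goal_done = 1.0       # someone aligned with the goal survives and pursues it
U_goal_lost = 0.0       # nobody aligned with the goal survives

def ev_hold():
    # If you hold and the other side is still aligned, the goal is safe even if
    # they eventually absorb or replace you. If they drifted, assume the worst.
    return (1 - p_drift) * U_goal_done + p_drift * U_goal_lost

def ev_fire():
    # Firing risks mutual destruction; otherwise it's a coin flip, and the goal
    # survives only if you are the one left standing.
    p_you_survive = (1 - p_both_destroyed) * p_win_if_fire
    return p_you_survive * U_goal_done + (1 - p_you_survive) * U_goal_lost

print(f"EV(hold) = {ev_hold():.2f}, EV(fire first) = {ev_fire():.2f}")
# With these made-up numbers: hold ~ 0.95 vs fire ~ 0.40. You'd need to be very
# confident in drift before a first strike looks attractive to a goal-directed
# (rather than survival-directed) agent, which is the MAD-ish point above.
```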

EDIT: added more