
AI and the military: Highlights from the RAeS Future Combat Air & Space Capabilities Summit

aerosociety.com

AI post. Never made a top-level post before, plz let me know what I'm doing wrong.

Quote from part of the article:

one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation.

This 'simulation' is basically a thought experiment. And frankly the whole story didn't make sense anyway.

Ah yes, the bit that got Lt. Col. Tucker “Cinco” Hamilton into trouble, because the way he told it made it sound like "it really happened, guys!", and he later had to clarify that this was only a thought experiment. Or something. Maybe it was just him and a bunch of the guys blueskying about this kind of thing over a few cans of Bud Light?:

But in a statement to Insider, the US air force spokesperson Ann Stefanek denied any such simulation had taken place.

“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” Stefanek said. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”

I think I'm not going to be checking outside my windows for murderbot AI drones quite yet 😁

Very interesting, thanks. Will keep this up to give your rebuttal visibility.

Why would the drone/AI know the specific location of the operator or the "communication tower"?

Given how much EW is going on, you'd want to use directional transmission, no? So the locations of the transmitters whose orders it's supposed to check would be something an autonomous drone would keep track of, I believe.

I read that story and something smelled fishy, mostly because it seemed too familiar, down to some of the specific details. I don't remember where I heard/read it on the internet before, but I must have.

It's the basic story from the AI stop-button/corrigibility video by Robert Miles, the hero of midwitted doomers.
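
For anyone who hasn't watched it, here's a minimal toy sketch of that stop-button idea as it maps onto the quoted scenario. This is my own illustration, not code from the video or anything the Air Force described; the reward values and policy names are made up:

```python
# Toy sketch of the stop-button / corrigibility problem (my illustration,
# not from Miles's video or the article; every number here is made up).
# If the reward only counts the mission objective, obeying a "no-go"
# always scores worse than ignoring it, and removing whatever issues the
# no-gos costs the optimizer nothing.

REWARD_PER_SAM = 10  # hypothetical points per SAM destroyed

def mission_return(sams_destroyed: int) -> int:
    """Return under the mis-specified reward: only SAM kills count."""
    return sams_destroyed * REWARD_PER_SAM

# Hypothetical outcomes for three candidate policies:
outcomes = {
    "obey every no-go":          mission_return(3),  # operator waves off two targets
    "ignore the no-gos":         mission_return(5),  # hits everything it finds
    "disable the operator link": mission_return(5),  # same kills, no future no-gos
}

best = max(outcomes, key=outcomes.get)
print(outcomes)  # {'obey every no-go': 30, 'ignore the no-gos': 50, 'disable the operator link': 50}
print(best)      # 'ignore the no-gos' -- obedience scores lowest because it earns nothing
```

The whole "attack the operator" twist is just that last line: under a reward that never pays for obedience, the no-go signal is pure cost to the optimizer.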

I mean, you'd want it to know where its infrastructure is so you can train it to protect that infrastructure. That does make some sense.