AI post. Never made a top-level post before, so please let me know what I'm doing wrong.
Quote from part of the article:
one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation.
Ah yes, the bit that got Lt. Col. Tucker “Cinco” Hamilton into trouble, because the way he told it made it sound like "it really happened, guys!" and he later had to clarify that this was only a thought experiment. Or something. Maybe it was just him and a bunch of the guys blueskying about this kind of thing over a few cans of Bud Light?
I think I'm not going to be checking outside my windows for murderbot AI drones quite yet 😁
It's the basic AI stop-button/corrigibility scenario from the videos by Robert Miles, the hero of midwitted doomers.