Robots are creeping onto the battlefield – so far with "human in the loop" controls, but we are fast approaching the point where lethal decisions might need to be made autonomously by the equipment itself. (Ballistic missile defense and counterbattery fire come to mind; indeed, the MK15 Phalanx could already be considered an autonomous robotic defense system.)
This opens a whole case full of cans of worms, and it was a subject of consideration at this year's Artificial General Intelligence conference. As the electronic battlefield evolves, so too will the equipment used, and at some point the speed and accuracy of machines will make the "human in the loop" an impediment. As we cross that barrier, it behooves us to pay attention to conventions, such as the Geneva Conventions, as well as rules of engagement, and to how those should inform the programming and controls of battlefield robots.
This is one of those lightning-fast presentations given while the conference was trying to make up time, so much is left out. Pay close attention, though: nearly every two sentences touch on topics that could be papers, presentations, or whole fields of study in themselves. There's a lot of food for thought here, so watch carefully, and then go here to read the paper.