A senior defense official said the Defense Advanced Research Projects Agency is prioritizing ethics and human oversight in a program developing artificial intelligence-based drones designed to distinguish enemies from civilians and allied troops in urban battles, Defense One reported Friday.
“We try to use the autonomy where appropriate, where suspicion is low and when suspicion increases, revert to a more human-in-the-loop mode,” said Lt. Col. Philip Root, program manager for DARPA’s Urban Reconnaissance through Supervised Autonomy program.
The reconnaissance program aims to build unmanned aerial systems that collect information about people in complex warfighting environments and help troops identify threats. Root noted that the drones will only provide information; the judgment on whether a person poses a risk will remain with a human operator. He added that the program will have legal, moral and ethical implications.
“We really want to try to ensure we allow non-hostiles, non-combatants, to move out of the way. Future urban conflict is going to take place in large cities where the population can’t just go to the mountains,” Root said.
The drones will assess unidentified individuals in the field by delivering a warning message and observing how each person responds. The system will then relay that information, along with video and location data, to an official who will help decide how to handle the situation. DARPA aims to begin testing the drones in 2021.