The U.S. Army has launched a research effort to prevent “backdoor attacks,” in which adversaries use machine learning and artificial intelligence algorithms to mistrain technologies like facial recognition platforms, FedScoop reported Tuesday.
The Army Research Office has awarded $60,000 to a Duke University research team to study and develop defensive software for AI databases to prevent “data poisoning” or AI mistraining.
“People tend to modify the input data very slightly so it is not so obvious to a human eye, but can fool the model,” noted Helen Li, a member of the Duke team.
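The kind of subtle modification Li describes can be illustrated with a toy sketch: a “trigger” patch only a few pixels wide is stamped onto training images and their labels are flipped, so a model trained on the poisoned set learns to associate the patch with the attacker's chosen label. The function name, patch placement, and array shapes below are illustrative assumptions, not the Duke team's actual method.

```python
import numpy as np

def poison_images(images, labels, target_label, trigger_value=1.0, patch_size=2):
    """Toy backdoor data poisoning: stamp a tiny trigger patch onto each
    image and relabel it. The change is a few pixels in one corner --
    barely noticeable to a human -- but a model trained on such data can
    learn to map the patch to target_label."""
    poisoned = images.copy()
    # Place the trigger in the bottom-right corner of every image.
    poisoned[:, -patch_size:, -patch_size:] = trigger_value
    # Flip every poisoned example's label to the attacker's target class.
    poisoned_labels = np.full_like(labels, target_label)
    return poisoned, poisoned_labels

# Usage: four fake 8x8 grayscale "faces" with labels 0..3.
clean = np.random.rand(4, 8, 8)
labels = np.arange(4)
bad_imgs, bad_labels = poison_images(clean, labels, target_label=7)
```

Defenses of the sort ARO is funding aim to detect exactly this pattern: training examples whose labels disagree with their content except for a small, consistent perturbation.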
“The fact that you are using a large database is a two-way street,” said MaryAnne Fields, program manager for intelligent systems at ARO. “It is an opportunity for the adversary to inject poison into the database.”
The team used a facial recognition database of 12,000 images to test new algorithms intended to detect backdoors planted by adversaries seeking to infiltrate the data.