NIST Publishes Guidance for Mitigating Adversarial ML Threats

The National Institute of Standards and Technology has issued a document that identifies threats associated with adversarial machine learning. The document, Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations, published Monday, collates common tactics used against artificial intelligence, the agency said.

Securing AI

The new document provides guidance to individuals and groups involved in the design, development, deployment and management of AI. It seeks to address adversarial machine learning, or AML, threats by empowering organizations that use AI to identify and mitigate attacks.

NIST dedicates sections of the document to evasion, poisoning, privacy and misuse attacks on predictive and generative AI. These four attack classes are among the most widely studied risks associated with AML.

In addition, the guidance aims to standardize AML-related concepts and keywords used across technology communities. 

NIST said the document was developed with contributions from experts in AML. The agency continues to work with domestic partners and counterparts in the United Kingdom to update the document as new information emerges.