The AI Agent Standards Initiative will support the establishment of industry-led protocols, security research and the trusted adoption of agentic artificial intelligence technologies.

New NIST Initiative Aims to Promote Secure, Interoperable Agentic AI Systems


The National Institute of Standards and Technology’s Center for AI Standards and Innovation has introduced a new initiative to ensure the interoperable and secure adoption of artificial intelligence agents, or systems capable of autonomous decision-making.

NIST said Tuesday that the AI Agent Standards Initiative aims to strengthen U.S. dominance in AI.



What Is the AI Agent Standards Initiative?

Under the initiative, CAISI will work with NIST’s Information Technology Laboratory, the National Science Foundation and other federal partners to advance three priorities:

  • Promoting industry-led AI agent standards and U.S. leadership in international standards bodies
  • Supporting community-driven open-source protocol development for AI agents
  • Advancing research on AI agent security and identity
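The third priority raises a concrete question: how does one service confirm which agent is calling it? A minimal sketch of the idea, using a shared-key HMAC credential, is below. All names are illustrative; NIST has not published an identity or authorization scheme, and the concept paper is still open for comment.

```python
import hashlib
import hmac

# Hypothetical shared key; a real deployment would use securely provisioned keys.
SECRET_KEY = b"demo-shared-secret"

def issue_credential(agent_id: str) -> str:
    """Issue an HMAC tag binding a credential to a specific agent identity."""
    return hmac.new(SECRET_KEY, agent_id.encode(), hashlib.sha256).hexdigest()

def verify_credential(agent_id: str, tag: str) -> bool:
    """Check that the presented tag matches the claimed agent identity."""
    expected = issue_credential(agent_id)
    # compare_digest avoids leaking information through timing differences.
    return hmac.compare_digest(expected, tag)

tag = issue_credential("billing-agent-01")
print(verify_credential("billing-agent-01", tag))  # True
print(verify_credential("imposter-agent", tag))    # False
```

A production scheme would more likely rely on asymmetric keys and scoped, expiring tokens, but the verification step, proving an agent is who it claims to be before honoring its requests, is the core of the identity problem the initiative names.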

NIST plans to issue research, guidance and additional deliverables under the initiative in the coming months. Stakeholders are encouraged to provide input through CAISI’s request for information on security threats, technical safeguards and assessment methods for agentic AI; responses to the RFI are due March 9. Interested parties may also submit feedback on ITL’s concept paper on AI agent identity and authorization until April 2.

What Security Risks Are Associated With Agentic AI?

The Information Technology Industry Council previously raised concerns about vulnerabilities tied to agentic AI.

In a paper published in November, ITI identified “jagged intelligence” as a key risk: highly capable models can fail unpredictably at basic tasks, potentially causing cascading errors in automated workflows. Agentic systems are also susceptible to prompt injection, data poisoning and unauthorized tool access, which could allow malicious actors to manipulate automated agents.
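One of the safeguards these concerns point toward is a per-agent allowlist that gates which tools an agent may invoke. The sketch below is illustrative only; the names are hypothetical and not drawn from any NIST or ITI specification.

```python
# Scope granted to this agent; anything outside it is refused.
ALLOWED_TOOLS = {"search_docs", "summarize_text"}

def invoke_tool(tool_name: str, payload: dict) -> str:
    """Dispatch a tool call only if the tool is in the agent's granted scope."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool '{tool_name}' is outside this agent's scope")
    # Dispatch to the real tool implementation would go here.
    return f"executed {tool_name}"

print(invoke_tool("search_docs", {"query": "agent standards"}))
try:
    invoke_tool("delete_records", {})  # unauthorized tool access is blocked
except PermissionError as err:
    print(err)
```

An allowlist like this limits the blast radius of a successful prompt injection: even if a manipulated agent tries to call a destructive tool, the gate refuses the request.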

ITI is urging policymakers to adopt risk-based, context-specific governance approaches, strengthen transparency and data protections, and encourage industry-led standards to support secure and responsible deployment of agentic AI.