ITI's logo. ITI offered policy recommendations for managing agentic AI risks in a new paper
The Information Technology Industry Council published a new paper discussing the vulnerabilities associated with agentic AI and offering policy recommendations to guide the technology's adoption.
/

ITI Urges Policy Action to Address Agentic AI Risks, Strengthen Governance

3 mins read

The global technology trade association the Information Technology Industry Council, a.k.a. ITI, is calling for proactive policy measures to address the vulnerabilities of agentic AI, a new class of artificial intelligence that operates autonomously with multi-step reasoning and planning.

While the technology promises to boost productivity and improve cybersecurity efficiency, ITI warned in a new paper that it also introduces new risks. Titled Understanding Agentic AI, the paper also offers policy recommendations to reduce the risks associated with agentic AI.

ITI Urges Policy Action to Address Agentic AI Risks, Strengthen Governance

AI is becoming an integral part of how information is processed and utilized in government and the military. At the Potomac Officers Club’s 2026 Artificial Intelligence Summit on March 19, you can learn directly from top voices from the public and private sectors how AI, machine learning and automation are transforming the government contracting industry and the world. Sign up for the highly anticipated event today.

“Agentic AI represents an evolution in how AI technology supports people, organizations, and business processes across the board,” said Courtney Lang, vice president of policy at ITI. “We’re still learning about the impacts of agentic AI, and that’s why dialogue between industry and government is so critical. ITI’s Understanding Agentic AI helps inform this crucial conversation by breaking down the technology and offering initial policy considerations to support the responsible development and adoption of agentic AI systems.”

What Are the Vulnerabilities Associated With Agentic AI?

The paper identifies jagged intelligence as a potential risk. Although agentic AI is trained on huge amounts of data, highly capable models fail unpredictably in basic tasks. Jaggedness could lead to cascading errors in automated workflows.

Agentic AI systems, like other AI systems, are vulnerable to a wide range of exploits. The technology can become targets of prompt injection, data poisoning and unauthorized tool access, potentially allowing attackers to manipulate automated agents.

Its ability to make autonomous decisions also raises challenges for accountability when outcomes cause harm or deviate from human intent. The paper also warns about automation bias, where users develop excessive trust in models, leading to poor decision making and loss of essential skills.

What Policies Can Promote Responsible Development & Deployment of AI?

ITI recommends a risk-based, context-specific regulatory approach that adapts existing AI frameworks, such as the National Institute of Standards and Technology’s AI Risk Management Framework, to agentic systems.

Policymakers are urged to:

  • Establish national AI strategies to guide responsible adoption.
  • Enforce transparency throughout the AI value chain.
  • Implement privacy-enhancing technologies and data governance standards.
  • Encourage open, industry-led protocols for interoperability and security.
  • Support workforce readiness and cross-sector collaboration to assess emerging risks.