A bipartisan pair on the Senate Intelligence Committee is pushing new legislation that would require the National Security Agency to build a security framework for sensitive artificial intelligence systems. Sens. Todd Young, R-Ind., and Mark Kelly, D-Ariz., introduced the Advanced Artificial Intelligence Security Readiness Act, directing the NSA to identify vulnerabilities across cutting-edge AI and issue guidance to guard against foreign theft, sabotage and espionage.

The push to harden America’s most sensitive AI systems underscores how quickly the AI threat landscape is evolving.
What Would the NSA Be Required to Produce?
The bill tasks the NSA’s Artificial Intelligence Security Center with developing a governmentwide security playbook that details risks in model development, training environments and the broader AI supply chain. The guidance must map out AI-unique attack surfaces, recommend protections for model weights and architectures, and outline strategies to prevent foreign penetration of advanced systems.
Young said continued U.S. leadership depends on ensuring critical technology cannot be stolen or compromised, while Kelly warned that AI underpins defense, intelligence, infrastructure and economic competitiveness, making vulnerabilities in these systems a national risk.
How Would the Guidance Be Developed?
The legislation directs the NSA to draw on a broad range of expertise. The agency would consult with subject matter experts, national laboratories, federally funded research centers and relevant federal departments, including the Department of Commerce’s Bureau of Industry and Security and the Departments of Homeland Security and Defense. Required activities include expert interviews, roundtable discussions, facility visits and assessments of industry frameworks on AI security and model scaling.
The guidance must help operators identify, protect against, detect, respond to and recover from cyber intrusions aimed at advanced AI and its supporting supply chains.
What Reporting Would Congress Receive?
The bill mandates two reports to the congressional intelligence committees: the first is due 180 days after enactment and the second is due one year later. Each report must include an unclassified and publicly available version to support adoption across industry and research institutions.
Which AI Systems Would Fall Under the Bill?
Covered technologies include advanced models whose capabilities, if stolen, could cause severe harm to national security. That includes systems that can match or surpass human experts in sensitive areas such as cyber operations; chemical, biological, radiological and nuclear analysis; autonomy; persuasive communication; or self-directed improvement.
Companion legislation has been introduced in the House.
