OpenAI said it has reached an agreement with the Department of War to deploy advanced artificial intelligence models on classified networks, outlining what it described as a multi-layered safeguard structure governing the use of its systems.
The company made the announcement on Saturday after President Donald Trump directed federal agencies to cease using Anthropic’s AI tools and War Secretary Pete Hegseth, a 2026 Wash100 Award recipient, indicated Anthropic would be designated a “supply chain risk.”

OpenAI said the agreement allows deployment of its models in classified environments while maintaining three core “red lines”: no use of its technology for mass domestic surveillance, no direction of autonomous weapons systems and no high-stakes automated decision-making without human approval.
In a post on social media platform X, Sam Altman, CEO of OpenAI, said the DOW “agrees with these principles.” He noted that the DOW “displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.”
What Guardrails Govern the Classified OpenAI Deployment?
OpenAI said the deployment will be cloud-only and will retain the company’s full safety stack, including technical safeguards designed to prevent misuse.
The company reiterated that it will not provide models without guardrails or deploy systems on edge devices, citing concerns that edge deployment could create pathways for autonomous lethal weapon use. Cleared forward-deployed engineers and safety researchers will remain involved in the deployment to help ensure compliance with the agreed-upon safeguards.
The contract language, according to OpenAI, permits lawful military use consistent with U.S. law and DOW policy and references existing legal authorities governing intelligence activities, including restrictions on domestic surveillance and use involving U.S. persons.
What Happened With Anthropic?
Anthropic CEO Dario Amodei rejected what the department described as its “best and final offer,” citing concerns that the deal would require removing guardrails related to domestic surveillance and fully autonomous weapons systems.
OpenAI said it does not believe Anthropic should be designated a supply chain risk. According to Altman, his company asked the DOW to offer the same contractual framework to all AI companies.
“We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements,” he wrote.
How Does This Fit Into the Pentagon’s Broader AI Rollout?
The classified agreement follows the DOW’s recent partnership with OpenAI to integrate a custom version of ChatGPT into the GenAI.mil enterprise AI platform for unclassified work.
According to the department, GenAI.mil has surpassed 1 million unique users within its first two months and provides access to large language models across the agency’s approximately 3 million personnel.
