President Donald Trump has directed all federal agencies to immediately cease use of Anthropic’s artificial intelligence technology, escalating a high-stakes dispute between the Pentagon and the AI firm over military guardrails.
“I am directing every federal agency in the United States government to immediately cease all use of Anthropic’s technology. We don’t need it, we don’t want it and will not do business with them again!” Trump wrote Thursday on Truth Social.
The order includes a six-month phase-out period for the Department of War and other agencies currently using Anthropic’s Claude models, according to coverage by The Wall Street Journal.
The decision follows weeks of mounting tension between the Pentagon and Anthropic over whether the company would lift safeguards limiting how its models can be used in military contexts.
Why Did the Pentagon Move Against Anthropic?
At the center of the dispute is a Pentagon demand that Anthropic allow its AI models to be used for all lawful military purposes, without restrictions embedded in the company’s terms of service.
Anthropic CEO Dario Amodei publicly rejected what the War Department described as its “best and final offer,” stating that the company could not agree to remove guardrails related to mass domestic surveillance and fully autonomous weapons systems.
According to Axios, the Pentagon plans to designate Anthropic as a “supply chain risk,” a label typically applied to companies tied to adversarial nations. The move would sever a contract reportedly valued at up to $200 million and bar Anthropic from future government work. Contractors working with the War Department may also be required to certify that they are not using Claude in covered workflows.
Defense officials have argued that once the military procures a tool, it must retain full discretion over lawful use cases without negotiating operational boundaries with a private vendor.
The clash intensified amid broader national security sensitivities following the capture of Nicolás Maduro, an operation in which Claude was reportedly used within classified systems.
What Does This Mean for Military AI Operations?
Claude is currently the only AI model deployed in the military’s classified systems, according to Axios. Defense officials acknowledged that disentangling it will be complex, particularly as AI has become embedded in sensitive planning and operational workflows.
The phase-out raises immediate questions about replacement providers. Elon Musk’s xAI recently signed an agreement enabling the use of its Grok model in classified environments, though sources told Axios it may not serve as a like-for-like substitute. Google’s Gemini and OpenAI’s ChatGPT are available in unclassified systems, and discussions are underway about expanding access in classified settings.
The decision also complicates matters for Palantir, which integrates Claude into some of its defense work and may now need to pivot to alternative models.
In a statement, Amodei said that if the Pentagon chooses to offboard Anthropic, the company will work to ensure a smooth transition and avoid disruptions to ongoing military operations.