The General Services Administration has proposed new terms and conditions for artificial intelligence systems that would require vendors selling AI technology to the federal government to grant agencies broad usage rights and meet neutrality standards for system outputs.

The draft guidance from GSA’s Federal Acquisition Service outlines contract provisions for AI models, services and related tools acquired through federal procurement channels. Comments on the draft are due March 20.
The proposal comes as the Trump administration ordered federal agencies to stop using Anthropic’s Claude AI and begin phasing out the technology amid a dispute with the Department of War over restrictions on how the system could be used.
GSA has also terminated Anthropic’s OneGov agreement. Federal Acquisition Service Commissioner Josh Gruenbaum, a past Wash100 awardee, said the move ends the company’s availability to the executive, legislative and judicial branches through GSA’s pre-negotiated contracts, Reuters reported.
What Rights Would the Government Have to Use AI Systems?
Under the proposal, contractors would grant the U.S. government an irrevocable license to use AI systems delivered under federal contracts.
The license would allow agencies to use the technology for any lawful government purpose, preventing vendors from imposing contractual or technical restrictions on legitimate federal use. The provision is designed to ensure agencies retain flexibility to deploy AI capabilities across missions and programs once the government acquires the technology.
What Neutrality Requirements Would Apply to AI Outputs?
The proposed terms also establish neutrality requirements governing how AI systems generate outputs for federal agencies.
Contractors would need to ensure that their systems do not intentionally encode partisan or ideological judgments in AI-generated outputs and that the systems produce objective responses when used in government contexts, according to the draft guidance.
What Are the Other AI Procurement Requirements Proposed in the Draft?
The proposal also outlines transparency and disclosure requirements for AI vendors seeking federal contracts. Contractors would need to disclose information about model training methods, system limitations and whether models were modified to comply with non-U.S. regulatory frameworks.
The draft also calls for safeguards to protect government data and limits vendors’ use of federal data for model training without authorization, part of GSA’s effort to strengthen oversight of AI technologies used across federal agencies.
