The Government Accountability Office, in a report published Thursday, flagged potential privacy risks and challenges that government-wide artificial intelligence guidance issued by the Office of Management and Budget failed to fully address.

What AI-Related Privacy Challenges Did GAO Find?
For the report, the congressional watchdog convened a panel of experts who identified 10 privacy challenges organizations face when deploying AI. While OMB’s AI guidance fully addressed two of these challenges, it only partially addressed the remaining eight.
Among the gaps, GAO found that OMB's guidance did not clearly outline how agencies should evaluate and audit AI systems that process sensitive information, or how they should separate sensitive data from the broader datasets used to train models. Experts also pointed to the lack of standardized performance metrics and of incentives to encourage organizations to implement privacy protections.
What Actions Did GAO Recommend for OMB?
To address these concerns, GAO recommended that OMB expand its guidance by specifying known privacy-related risks agencies should consider when developing AI policies.
The watchdog also called on OMB leadership to issue additional guidance on best practices for evaluating and auditing AI models and for separating sensitive data. It further suggested more guidance on establishing performance metrics, improving transparency around user consent and adding AI-specific considerations to privacy impact assessments.
What New AI Requirements Did OMB Release?
The report comes a few months after OMB issued a memorandum outlining requirements to ensure that AI systems procured by federal agencies adhere to “unbiased AI” principles.
The guidance directs agencies to collect key information from vendors to assess whether large language models meet these standards. Required disclosures include acceptable use policies, system and data cards, end-user resources and mechanisms for user feedback.
At the same time, OMB cautioned agencies against requiring vendors to disclose sensitive technical details, such as model weights, to balance transparency with intellectual property protections.
