
Key privacy gaps in Washington’s AI policy not addressed, audit finds

Growing tension in AI strategy as agencies are encouraged to accelerate adoption before privacy framework is completed

A new Government Accountability Office (GAO) report warns that the federal government’s central AI guidance still leaves important privacy gaps unresolved, even as agencies increasingly deploy AI in ways that touch large stores of sensitive personal data.

The audit report says the Office of Management and Budget’s (OMB) government-wide AI guidance, issued on April 3, 2025, does not fully address the privacy risks and operational challenges agencies face when using AI systems.

The report comes as federal agencies are under growing pressure to adopt AI tools for operational efficiency, public services, and internal decision support.

Indeed, GAO’s findings land awkwardly beside the Trump administration’s broader AI posture, which has so far leaned heavily toward speeding adoption and stripping away what it views as barriers to innovation.

In January 2025, President Donald Trump issued Executive Order 14179 directing the government to revoke prior AI policies seen as obstacles to American AI leadership, and the White House later framed its AI agenda around removing “onerous federal regulations” and encouraging faster deployment.

OMB’s April 2025 AI guidance likewise emphasizes innovation and responsible adoption, but GAO found that, for all that forward momentum, the administration’s government-wide guidance still does not fully spell out the known privacy risks agencies should be weighing when they use AI on sensitive data.

That contrast is important. The administration has made clear that it wants a lighter-touch, pro-deployment AI framework, but GAO is effectively warning that faster adoption without more explicit privacy guardrails leaves agencies to navigate some of the hardest questions on their own.

Congress’ watchdog found that OMB fully addressed only two of ten selected privacy-related challenges identified by GAO’s expert panel, while partially addressing the other eight.

In that sense, the report does not reject the administration’s push to modernize government with AI. It argues that, if Washington is going to move quickly, it also needs more precise privacy guidance to keep that acceleration from outrunning the safeguards meant to protect the public’s personal information.

GAO noted that AI is already being used across government in practical ways, pointing to examples such as a Department of Health and Human Services chatbot helping security teams answer routine questions, and AI tools at NASA that assist with scientific targeting by planetary rovers.

At the same time, however, GAO said those potential benefits do not erase the risks that come with using AI on systems containing personally identifiable information and other sensitive data.

To examine the issue, GAO convened a three-day virtual panel in January 2025 with 12 experts drawn from federal, industry, nonprofit, and academic backgrounds.

The agency used that discussion to develop a non-exhaustive list of privacy risks and challenges associated with AI, then compared those findings with existing OMB guidance, including Memorandum M-25-21 on accelerating federal AI use, Memorandum M-25-22 on AI acquisition, Memorandum M-25-05 on open government data access and management, the 2019 evidence-based policymaking guidance, and OMB Circular A-130 on managing information as a strategic resource.

GAO said the experts identified a range of privacy risks that go beyond ordinary software compliance concerns. One example highlighted in the report is that AI can reveal sensitive information embedded in raw data sets, potentially exposing private information in ways agencies may not anticipate.

This is known as de-anonymization. It occurs when data that has been stripped of direct identifiers is combined with other datasets to re-identify individuals. The process relies on quasi-identifiers like location, gender, birth date, or device IDs, which, when cross-referenced with other information, can uniquely identify people. For example, a combination of birth date, gender, and ZIP code can be enough to pinpoint an individual, even in anonymized datasets.
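The re-identification attack described above amounts to a join between an "anonymized" dataset and a public auxiliary dataset on shared quasi-identifiers. A minimal sketch in Python illustrates the mechanics; all records, names, and field values here are hypothetical, invented purely for demonstration:

```python
# Illustrative sketch of re-identification via quasi-identifiers.
# All data below is hypothetical and invented for demonstration.

# An "anonymized" dataset: direct identifiers removed, but quasi-identifiers
# (birth date, gender, ZIP code) remain alongside a sensitive attribute.
anonymized = [
    {"birth_date": "1985-04-12", "gender": "F", "zip": "98101", "diagnosis": "asthma"},
    {"birth_date": "1990-11-02", "gender": "M", "zip": "98052", "diagnosis": "diabetes"},
]

# A public auxiliary dataset (e.g., a voter roll) that carries names
# together with the same quasi-identifiers.
public_records = [
    {"name": "Alice Example", "birth_date": "1985-04-12", "gender": "F", "zip": "98101"},
    {"name": "Bob Example", "birth_date": "1990-11-02", "gender": "M", "zip": "98052"},
]

QUASI_IDENTIFIERS = ("birth_date", "gender", "zip")

def key(record):
    """Project a record onto its quasi-identifier tuple."""
    return tuple(record[k] for k in QUASI_IDENTIFIERS)

# Index the public records by quasi-identifier tuple, then join:
# any anonymized record whose tuple is unique in both datasets is
# re-linked to a named individual, exposing the sensitive attribute.
index = {key(r): r["name"] for r in public_records}

reidentified = [
    {"name": index[key(r)], "diagnosis": r["diagnosis"]}
    for r in anonymized
    if key(r) in index
]
```

The attack succeeds precisely when a quasi-identifier combination is rare enough to be unique across both datasets, which is why removing names alone is not sufficient anonymization.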

More broadly, GAO stressed that AI systems are often difficult to understand, can behave unpredictably as data changes over time, and are shaped not only by technical design but also by the social context in which they are deployed.

That combination makes privacy protection more difficult than simply applying traditional information security rules to a new class of software.

The report also says federal agencies face practical hurdles in trying to protect privacy while using AI.

Among the challenges identified by the expert panel were auditing and evaluating AI models that rely on sensitive information, separating sensitive data from products and systems in which it is deeply embedded, the lack of clear best practices for mitigating AI privacy risks, and the lack of performance metrics and incentives for organizations to adopt robust privacy practices.

The panel also identified broader issues such as weak public AI literacy, limited access to privacy-protective technology, insufficient transparency about how sensitive data is used in AI systems, and tradeoffs between system performance and privacy protections.

GAO’s central finding is that OMB’s guidance does not fully cover that landscape. Of the ten expert-identified privacy-related challenges GAO determined were within OMB’s ability to address through guidance, the watchdog found OMB had fully addressed only two and partially addressed the remaining eight.

The only two challenges GAO considered fully addressed were the lack of skills among the federal workforce to implement AI while mitigating privacy risks and the scalability of implementing AI systems with privacy protections. All the other issues were only partially addressed.

On workforce skills, GAO said OMB’s M-25-21 memo directs agencies to build foundational knowledge on responsible AI use, prioritize recruiting and retaining technical AI talent, and make use of training programs and hands-on expertise.

On scalability, GAO said the same memo instructs agencies to assess AI maturity goals and resource areas such as data governance, infrastructure, privacy, and security so AI systems can be deployed more broadly without losing basic safeguards.

Those steps, GAO said, are meaningful if effectively implemented. But the watchdog report says the larger weakness is that OMB’s AI guidance does not specify the types of known privacy-related risks agencies should be considering when they update their AI-related policies.

GAO said that omission matters because AI systems pose risks that are in some ways distinct from traditional software or information systems.

The watchdog pointed to the National Institute of Standards and Technology’s (NIST) observation that AI systems may be trained on data that shifts significantly and unexpectedly over time, affecting trustworthiness in ways that can be hard to detect.

Without more explicit direction, GAO said, agencies may not recognize the range of privacy harms that AI can introduce when they conduct privacy impact assessments or design mitigation strategies.

The report situates those concerns within a broader federal legal and policy framework that already includes the Privacy Act of 1974, the E-Government Act’s privacy impact assessment requirements, OMB Circular A-130, the Federal Privacy Council structure established by executive order, and NIST privacy and security guidance.

In other words, GAO is not arguing that the federal government lacks any privacy framework at all. Rather, it argues that the framework has not been translated into sufficiently specific, actionable guidance for AI use across agencies.

GAO made two recommendations to OMB. First, it said OMB should update or supplement its guidance so agencies are better informed about the kinds of privacy risks that can arise from AI use.

Second, GAO said OMB should use existing inter-agency bodies, including the Chief AI Officer Council or the Federal Privacy Council, to facilitate information sharing about strategies and best practices for addressing AI-related privacy challenges.

The report says those forums could help agencies with less experience learn from others and better understand the privacy implications of AI systems before problems occur.

OMB did not provide comments on the draft audit report, according to GAO. That leaves the report as both a critique of the current state of federal AI governance and a roadmap for what GAO believes is still missing.

As federal agencies continue adopting AI in areas that involve benefits, health, employment, security, and other sensitive government functions, the watchdog’s warning is that the pace of implementation is outstripping the precision of the privacy guidance meant to govern it.
