Auditor demands better accounting from IRS for biometric identity proofing

The U.S. Internal Revenue Service’s (IRS) effort to modernize digital identity verification stands at the intersection of fraud prevention, AI governance, and public trust, revealing both significant progress and pressing oversight challenges. Over the last decade, the agency has faced an uphill battle against identity theft and refund fraud that once siphoned billions annually from the Department of the Treasury.
In response, the IRS implemented a digital identity proofing strategy centered on biometric technologies, particularly facial recognition, through its Secure Access Digital Identity (SADI) platform and a partnership with the private credential provider ID.me. SADI relies on credential service providers that comply with the National Institute of Standards and Technology (NIST) Special Publication 800-63-3 to let people securely access and use IRS online tools and applications.
Verification pass rates have more than doubled since this shift, rising from the 30 to 40 percent achieved under outdated password-based systems to over 70 percent with biometrics. Even so, the program’s opaque governance structure and overreliance on a single vendor have raised concerns.
The digital identity landscape for the IRS transformed dramatically after 2020. Spurred by pandemic-era demands and the urgency of distributing benefits like the advance Child Tax Credit, the IRS accelerated deployment of biometric ID verification through an existing Treasury contract with ID.me.
By 2022, the agency was requiring biometric identity assurance at NIST Identity Assurance Level 2 across nearly three dozen applications. Users submitted selfies and driver’s license photos for comparison by facial recognition algorithms, with fallback live video chat sessions for those unable or unwilling to use the automated path. The result was a dramatic increase in access, particularly for underserved populations previously locked out by legacy systems that relied on memory-based security questions.
Jay McTigue, director of strategic issues at the Government Accountability Office (GAO), noted in a recent interview with Federal News Network that while performance has improved and taxpayer access has expanded, the IRS’s oversight of this system remains insufficient. His comments echo the findings of a GAO audit published in June.
Among the most significant shortcomings is that the IRS has not established measurable goals or outcome-based benchmarks for ID verification across its many digital services, which means the agency cannot determine whether a 70 percent success rate is sufficient or where usability gaps persist for vulnerable users.
“What we did find is that while ID.me provides a lot of data to IRS, IRS could do a better job of actually managing and using the data that’s being provided by ID.me,” McTigue said. “For example, this identity verification is used across roughly 30 or three dozen different applications that various taxpayers can use. And so, while ID.me is providing data, IRS needs to look at the different applications and set goals and identify the objectives for each one of those interactions so that IRS can determine whether or not a 70 percent pass rate is good.”
Continuing, McTigue added, “You know, maybe that’s okay for a given application, but maybe it should be closer to 95 percent for some other applications, or maybe some other number might be appropriate. So, first and foremost, IRS needs to identify goals and objectives for the different applications. Then, once they have those kinds of data, they need to look at the data and evaluate or assess whether or not it’s meeting the results and outcomes that IRS wants for the given application, the given set of taxpayers or others using the service.”
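McTigue’s prescription, setting a pass-rate goal for each application and then checking reported results against it, amounts to a simple comparison loop. A minimal sketch follows; the application names, goal thresholds, and pass-rate figures are hypothetical illustrations, not data from GAO’s audit or from ID.me’s reporting:

```python
# Illustrative sketch of per-application goal-setting and evaluation.
# All names and figures below are hypothetical, not GAO or IRS data.

# Outcome-based goals, set per application based on its stakes.
goals = {
    "online_account": 0.95,      # high-stakes account access
    "transcript_request": 0.85,
    "payment_plan": 0.70,
}

# Pass rates as reported by the credential service provider.
reported_pass_rates = {
    "online_account": 0.72,
    "transcript_request": 0.88,
    "payment_plan": 0.71,
}

def evaluate(goals, reported):
    """Return each application whose reported pass rate falls
    short of its goal, mapped to the size of the shortfall."""
    shortfalls = {}
    for app, goal in goals.items():
        rate = reported.get(app)
        if rate is not None and rate < goal:
            shortfalls[app] = round(goal - rate, 2)
    return shortfalls

print(evaluate(goals, reported_pass_rates))
# e.g. {'online_account': 0.23}
```

The point of the exercise is not the arithmetic but the precondition it exposes: without documented goals, the agency has nothing to evaluate the vendor’s data against.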
Another structural weakness lies in data governance. Though ID.me submits regular performance data — ranging from true pass rates to user abandonment points — the IRS lacks documented procedures to evaluate or share this information among relevant offices. As a result, cybersecurity teams, procurement staff, and program managers may be working from incomplete or inconsistent information, hindering corrective actions. GAO said it found no formal mechanisms for internal data dissemination, further limiting oversight capacity.
A third layer of concern centers on the use of AI. ID.me’s identity verification process relies heavily on AI-powered facial recognition. But the IRS failed to list these tools in its official AI inventory, violating transparency mandates under Executive Order 13960 and the Advancing American AI Act, which was enacted as part of the National Defense Authorization Act for Fiscal Year 2023.
The IRS also neglected to evaluate ID.me’s AI tools through its internal AI governance framework, according to GAO. This is more than a procedural oversight. The use of opaque algorithmic decision-making in high-stakes government identity verification raises risks of error, bias, and lack of recourse for users denied access to critical services.
GAO’s audit also questioned the IRS’s dependence on a sole provider for such a sensitive function. Though ID.me was the only vendor able to meet federal standards under the urgent timelines of 2020, that exclusivity now creates systemic vulnerabilities. Any failure, breach, or undisclosed flaw in ID.me’s system could disrupt access for millions. While ID.me touts its compliance and its own performance successes, GAO said that evaluation of outcomes must not rely exclusively on the vendor’s self-assessments.
Oversight is further weakened by contractual shortcomings. The IRS’s arrangement with ID.me was executed under a Treasury-run blanket purchase agreement with software reseller V3Gate. While expedient, this allowed the IRS to bypass certain steps in establishing its own performance evaluation plan.
And though the contract includes important privacy safeguards such as requiring deletion of biometric data within 48 hours and chat transcripts within 30 days, the IRS relies heavily on ID.me’s self-attestation for enforcement. GAO concluded that without audits or independent verification of these claims, the IRS cannot guarantee compliance.
The risks are not hypothetical. As of the 2025 filing season, the IRS had flagged over 2.1 million returns as potential identity theft, requiring affected taxpayers to undergo authentication. In many cases, delays in verifying identity have led to multi-month or even multi-year waits for refunds.
“In these cases, the IRS sent a letter to taxpayers notifying them they had to authenticate their identities before receiving their refunds,” National Taxpayer Advocate Erin M. Collins told Congress, adding that “the IRS typically takes several months to resolve these cases.”
At last count, nearly 400,000 cases remained pending within the IRS’s Identity Theft Victim Assistance (IDTVA) unit, with resolution averaging 20 months. These prolonged delays disproportionately affect low-income filers: almost 70 percent of affected taxpayers earn less than 250 percent of the federal poverty level. While the IRS has promised improvements, GAO and Collins have called for urgent reforms, including cutting IDTVA case resolution times to four months.
There is a broader lesson here for the federal government. As McTigue noted, many agencies – from the Social Security Administration to those administering grant programs – depend on identity verification to deliver services securely. The IRS’s experience offers a playbook for what to emulate and what to avoid.
On the one hand, digitization and AI-enabled ID proofing have dramatically improved access and efficiency. On the other hand, their unchecked deployment, especially through third-party providers, risks introducing new forms of exclusion, opacity, and potential abuse.
GAO issued four key recommendations: establish outcome-based performance goals for each IRS application using digital identity proofing; conduct systematic evaluations of program effectiveness; formalize procedures for sharing performance data across IRS units; and ensure compliance with AI transparency and inventory requirements.
The IRS agreed to implement all four recommendations.
The challenge now is execution. Technology modernization cannot be an end in itself. If the IRS is to fulfill its dual mission of taxpayer service and fraud prevention, it must anchor its digital ID strategy in transparency, oversight, and user-centric design.
Biometric tools like facial recognition, if poorly governed, can replicate or deepen systemic inequalities. But with clear goals, privacy-respecting implementation, and accountability, they can also deliver more secure and equitable access to government services.
The success of the IRS’s digital identity program, and of similar efforts across the federal government, will ultimately depend not on how futuristic the technology is, but on how well its use is governed.