FBI’s AI, biometrics boom is accelerating, but paperwork isn’t keeping up

The Federal Bureau of Investigation (FBI) has rapidly expanded how it uses AI across investigations, intelligence support, and internal operations, more than doubling the number of AI use cases it reported in the Department of Justice’s (DOJ) latest public inventory.

DOJ’s 2025 AI Use Case Inventory – which aggregates AI systems across all DOJ components – lists 50 use cases attributed to the FBI.

The inventory is supposed to be a transparency mechanism, a standardized annual accounting of where AI is embedded in federal work and where it touches the public.

That transparency requirement traces back to Executive Order 13960, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government, issued on December 3, 2020, and to subsequent federal policy that now requires agencies to identify AI systems, categorize their impact, and publish public-facing details.

But the FBI’s 2025 entries show a widening gap between the bureau’s fast-moving deployment of AI-enabled investigative tooling and the slower, thinner public documentation meant to help outsiders understand the stakes, the safeguards, and the failure modes.

That gap matters most where AI feeds law enforcement decisions.

The FBI’s 2025 inventory includes nine “high-impact” AI use cases, all deployed within law enforcement processes such as biometric identification workflows, facial recognition, and automated analysis that can narrow suspects, prioritize leads, or convert large data flows into investigative direction.

Those are precisely the kinds of systems where false positives can push investigators toward the wrong person, where automation bias can harden hunches into “hits,” and where oversight is difficult once a tool becomes routine.

The DOJ inventory does not read like a single FBI AI program. It reads like an expanding ecosystem. The FBI’s 2025 use cases span classic machine learning, computer vision, and generative AI, with a heavy concentration on functions that compress time and labor, such as translation, transcription, summarization, triage, and identity matching.

The FBI saw a jump from 19 AI use cases in 2024 to 50 in 2025, with 27 categorized under “law enforcement,” compared with 15 the year prior.

The inventory itself also reflects a DOJ-wide growth curve, expanding to 315 entries across the department, a 30.7 percent increase from the 2024 inventory, and DOJ says it consolidated widely used tools into department-wide entries to avoid duplication.

Consolidation can reduce the apparent number of entries for tools used across multiple components, which may obscure component-specific implementation differences.

Critics have argued that consolidation and vague labeling can also blur the operational boundaries of sensitive tools, making it harder to know what is truly new, what is being scaled, and what exactly is being deployed in the field.

Federal policy now treats high-impact AI as a special category that demands an explicit baseline of risk management.

Office of Management and Budget (OMB) Memorandum M-25-21 requires agencies to document implementation of minimum risk management practices for high-impact AI within 365 days of the memo’s issuance, and it directs agencies to discontinue use if a particular high-impact use case is not compliant.

Agencies are permitted to continue operating high-impact systems during the 365-day compliance window but must document and implement minimum safeguards before the deadline.

Because the memorandum is dated April 3, 2025, that compliance clock effectively points to an early April 2026 deadline. The problem flagged by stakeholders looking at the DOJ inventory is that the FBI’s high-impact deployments appear to be running ahead of those documented steps.

According to outside analysis of the DOJ inventory, none of the FBI’s deployed high-impact use cases had completed the required risk management steps, even as the April deadline approaches.

The FBI began five new AI projects intended to generate investigative leads using suggested facial biometric matches and other data, with four already deployed operationally, but the new entries provide no additional detail about risk management.

M-25-21 is explicit that if agencies lack access to source code, models, or underlying data, they are still expected to test through alternative methodologies, including systematic querying and structured evaluation.

A striking feature of the FBI’s high-impact entries is how often the underlying systems are purchased from vendors while vendor identities are not meaningfully disclosed publicly, sometimes appearing as generic labels rather than named products.

All but one of the high-impact systems appear to rely on vendor-built platforms.

That opacity is not new. A DOJ Office of Inspector General audit published in late 2024 found that the FBI cited vendor and commercial provider transparency as a barrier to AI adoption, warning that providers may embed AI capabilities that purchasers cannot verify without technical details that are not typically available to the bureau, and that the evolving policy landscape could create backlogs for AI approvals.

In other words, the FBI has long recognized the structural governance problem at the core of modern AI procurement.

The FBI’s inventory growth is inseparable from a continuing expansion of biometric capability. The bureau’s public-facing materials have long framed AI as a way to process scale, not as a machine that “decides” guilt.

On its own website, the FBI describes AI tools for vehicle recognition, video analytics, triage of voice samples for language identification, and converting speech to text, emphasizing that the bureau uses information generated from these techniques for investigative leads.

That language matters because it reflects the standard rhetorical boundary law enforcement agencies draw when challenged. They insist that AI does not “make the decision,” but merely “generates leads.”

Yet, the practical effect of lead generation depends on how those leads are handled inside an investigation and whether humans treat machine outputs as suggestions to be tested or as answers to be accepted.

The FBI’s Next Generation Identification (NGI) system is a concrete example of how AI becomes operationally consequential even when framed as advisory. The FBI describes NGI’s facial recognition search function as producing a ranked list of candidate photos when authorized law enforcement submits a probe image.

In practice, a ranked candidate list can narrow the field, shape investigative attention, and influence witness interactions, particularly when other evidence is thin or when the image quality is poor.

The wider policing landscape illustrates why safeguards matter. Investigations in recent years have documented wrongful arrests linked to facial recognition misidentifications and the dangers of agencies leaning too heavily on AI-derived matches.

The FBI’s NGI facial recognition policy explicitly states that facial recognition results are investigative leads only and not probable cause.

Not all of the FBI’s growth is about face biometric matching. The DOJ inventory reflects a broader shift toward AI tools that convert unstructured material into searchable and prioritizable forms, including transcription, translation, summarization, and synthesis.

These use cases can look innocuous because they resemble productivity software. But in investigative contexts, triage is power. If a tool summarizes interviews, clusters tips, flags sentiment, or extracts entities and locations, it can determine what gets read first and what gets read at all.

If it is wrong, it can silently route attention away from key details. If it is biased, it can repeatedly push investigators toward the same kinds of targets.

This is one reason critics argue that DOJ’s inventory reporting should not stop at labeling a tool as “summarization” or “data triage.” They want to know what data the model was trained on, what testing was conducted, what error rates look like in the FBI’s domain-specific conditions, and how the bureau checks for drift over time.

Observers have argued that DOJ’s inventory transparency can lag behind even that of the Department of Homeland Security (DHS), an agency not known for expansive public disclosure.

DHS’s AI use case inventory process, while uneven across components, has become a point of comparison precisely because it shows what “more detail” can look like in federal reporting, even inside a law enforcement and security environment.

DOJ, for its part, argues that it has pursued rigorous collection and review to improve quality, comprehensiveness, and transparency, and it notes that some information is withheld under FOIA-like standards and other restrictions.

The coming months will test whether the federal AI governance framework has teeth when applied to deployed law enforcement AI. M-25-21 is not subtle about its intent.

High-impact AI is supposed to be governed through documented practices, and noncompliance is supposed to trigger safe discontinuation until the minimum practices are met.

What the FBI’s 2025 inventory shows is a bureau that is scaling AI quickly in precisely the areas that generate investigative leverage, particularly biometrics and data triage, while public reporting still struggles to answer the most basic governance questions.

The FBI has strong institutional reasons to pursue these tools. Its own public materials describe modern investigations as data-intensive and time-sensitive, and AI as a way to process video, audio, and text at speed.

But the central issue is not whether the bureau should use AI at all, but whether the public, Congress, and the courts can evaluate how those systems behave before they become normalized infrastructure.

The inventory was designed to make that evaluation possible. The FBI’s surge in reported use cases suggests the bureau is moving fast. The question now is whether the risk management and transparency apparatus built to govern high-impact AI will move fast enough to matter.
