Problem with police use of facial recognition isn’t with the biometrics

Washington Post investigation effectively says FRT doesn’t arrest people, police do

A major investigation by the Washington Post has revealed that police in the U.S. regularly use facial recognition as the sole basis for making arrests, contravening a legal requirement for officers to have probable cause and corroborating evidence.

The Post’s findings, which also bring to light two previously unreported cases of people wrongfully arrested after being identified with facial recognition, highlight one major potential flaw in biometric technology for law enforcement use cases: police must be trusted to use it ethically.

And yet, “law enforcement agencies across the nation are using the artificial intelligence tools in a way they were never intended to be used,” says the Post: “as a shortcut to finding and arresting suspects without other evidence.”

Journalists Douglas MacMillan, David Ovalle and Aaron Schaffer identified “75 departments that use facial recognition, 40 of which shared records on cases in which it led to arrests. Of those, 17 failed to provide enough detail to discern whether officers made an attempt to corroborate AI matches.”

Among the remaining 23 departments that had detailed records about facial recognition use, they found that “15 departments spanning 12 states arrested suspects identified through AI matches without any independent evidence connecting them to the crime.”

Moreover, “some law enforcement officers using the technology appeared to abandon traditional policing standards and treat software suggestions as facts.”

‘Automation bias’ is a problem; so is lax police work

The report breaks down police failures in the eight known wrongful arrests, which include failing to check alibis and blatantly ignoring suspects’ physical characteristics (the latter in the case of a pregnant woman). The trend is clear, and the Post suggests the examples are “probably a small sample of the problem.”

The piece comes dangerously close to missing its own point in quoting Katie Kinsey, chief of staff for the Policing Project at NYU School of Law, who notes that facial recognition software “performs nearly perfectly in lab tests using clear comparison photos,” but has not been subject to “real-world, independent testing of the technology’s accuracy in how police typically use it — with lower-quality surveillance images and officers picking one candidate from a list of possible matches.”

Because of this, Kinsey says, it’s hard to know how often the software gets it wrong.

Yet her blame is misplaced. As the Post investigation illustrates, it is not the biometric software that usually gets it wrong, but the police. The report notes research showing that “people using AI tools can succumb to ‘automation bias,’ a tendency to blindly trust decisions made by powerful software, ignorant to its risks and limitations.”

If anything, the software is too good at its job. Grainy suspect images run through facial recognition algorithms for photo lineups are highly likely to turn up people who look a lot like the suspect. And when those pictures are shown to victims, says Gary Wells, a psychologist at Iowa State University who studies faulty eyewitness identifications, they are highly likely to make an ID, even if it is false.

AI to draft police reports not a good idea: ACLU

Solving the problem depends on the same key ingredients that underpin the larger global ecosystem of biometric technology: regulation and trust. And yet, who polices the police is a question that goes beyond biometrics.

A recent report from the ACLU notes that “police departments are adopting software products that use AI to draft police reports for officers” – and says that’s a very bad idea: “AI has many potential functions, but there is no reason to use it to replace the creation of a record of the officer’s subjective experience.”

Other organizations have raised concerns about the potential for civil and human rights violations in AI deployments, including biometric facial recognition, by the DEA and FBI.

And a 137-page joint federal report on law enforcement use of biometrics, published this month by the U.S. Department of Homeland Security (DHS), the Department of Justice (DOJ) and the White House Office of Science and Technology Policy, lays out the technology’s dual-edged implications.

In each case, technology is an enabler for human decisions. For biometric algorithms, there are standards, tests and certifications that govern their use. Regulating human behavior is much harder, especially in those who wield power. Algorithms have their flaws, but they are generally more predictable than people – and less likely to skip a step or two when someone’s freedom is on the line.
