Problem with police use of facial recognition isn’t with the biometrics

Washington Post investigation effectively says FRT doesn’t arrest people, police do

A major investigation by the Washington Post has revealed that police in the U.S. regularly use facial recognition as the sole basis for making arrests, contravening a legal requirement for officers to have probable cause and corroborating evidence.

The Post’s findings, which also bring to light two previously unreported cases of people wrongfully arrested after being identified with facial recognition, highlight one major potential flaw in biometric technology for law enforcement use cases: police must be trusted to use it ethically.

And yet, “law enforcement agencies across the nation are using the artificial intelligence tools in a way they were never intended to be used,” says the Post: “as a shortcut to finding and arresting suspects without other evidence.”

Journalists Douglas MacMillan, David Ovalle and Aaron Schaffer identified “75 departments that use facial recognition, 40 of which shared records on cases in which it led to arrests. Of those, 17 failed to provide enough detail to discern whether officers made an attempt to corroborate AI matches.”

Among the remaining 23 departments that had detailed records about facial recognition use, they found that “15 departments spanning 12 states arrested suspects identified through AI matches without any independent evidence connecting them to the crime.”

Moreover, “some law enforcement officers using the technology appeared to abandon traditional policing standards and treat software suggestions as facts.”

‘Automation bias’ is a problem; so is lax police work

The report breaks down police failures in the eight known wrongful arrests, which include failing to check alibis and blatantly ignoring suspects’ physical characteristics (the latter in the case of a pregnant woman). The trend is clear, and the Post suggests the examples are “probably a small sample of the problem.”

The piece comes dangerously close to missing its own point in quoting Katie Kinsey, chief of staff for the Policing Project at NYU School of Law, who notes that facial recognition software “performs nearly perfectly in lab tests using clear comparison photos,” but has not been subject to “real-world, independent testing of the technology’s accuracy in how police typically use it — with lower-quality surveillance images and officers picking one candidate from a list of possible matches.”

Because of this, Kinsey says, it’s hard to know how often the software gets it wrong.

Yet her blame is misplaced. As the Post investigation illustrates, it is not the biometric software that usually gets it wrong, but the police. The report notes research showing that “people using AI tools can succumb to ‘automation bias,’ a tendency to blindly trust decisions made by powerful software, ignorant to its risks and limitations.”

If anything, the software is too good at its job. Grainy suspect images run through facial recognition algorithms for photo lineups are highly likely to surface people who look a lot like the suspect. In which case, says Gary Wells, a psychologist at Iowa State University who studies faulty eyewitness identifications, when those pictures are shown to victims, they are highly likely to make an ID, even if it is false.

AI to draft police reports not a good idea: ACLU

Solving the problem depends on the same key ingredients that underpin the larger global ecosystem of biometric technology: regulation and trust. And yet, who polices the police is a question that goes beyond biometrics.

A recent report from the ACLU notes that “police departments are adopting software products that use AI to draft police reports for officers” – and says that’s a very bad idea: “AI has many potential functions, but there is no reason to use it to replace the creation of a record of the officer’s subjective experience.”

Other organizations have raised concerns about the potential for civil and human rights violations in AI deployments, including biometric facial recognition, by the DEA and FBI.

And a 137-page federal joint agency report on law enforcement use of biometrics, published this month by the U.S. Department of Homeland Security (DHS), the Department of Justice (DOJ) and the White House Office of Science and Technology Policy, examines the technology’s dual-edged implications.

In each case, technology is an enabler for human decisions. For biometric algorithms, there are standards, tests and certifications that govern their use. Regulating human behavior is much harder, especially in those who wield power. Algorithms have their flaws, but they are generally more predictable than people – and less likely to skip a step or two when someone’s freedom is on the line.
