Those wronged by facial recognition join rights groups in calling for police ban

Racial bias and lack of accountability cited as major problems
There is a small but growing lobby made up of people who have been wrongly arrested as a result of facial recognition technology. Among them is Robert Williams, an American who was handcuffed in front of his family in 2020 after police facial recognition misidentified him as a suspect in a federal larceny case.

Williams is now calling for police forces in Ireland to scrap their plans to deploy the biometric tech. In comments made at an event in Dublin hosted by the Irish Council for Civil Liberties (ICCL) and issued in a release, Williams points to the risk that comes with using tools that are prone to misidentify people of color.

“Federal studies have shown that facial recognition systems misidentify Asian and Black people up to 100 times more often than white people,” Williams says. “In America, we’re trying to undo the harms that FRT has already done. Here in Ireland, you have an opportunity not to introduce it in the first place. I hope your government will listen to experiences like mine and think twice before bringing FRT into policing.”

Williams refers to a 2019 report from NIST, which has since been updated, showing that some algorithms were 10 to 100 times more likely to misidentify a Black or East Asian face than a white one. Not all of the algorithms evaluated are in commercial production, however, and others were found to have imperceptible differences in performance between demographics, prompting NIST Biometric Standards and Testing Lead Patrick Grother to urge those implementing facial recognition to be specific in evaluating bias.

Williams’ statement on the U.S. could also be debated, given the uptake of facial recognition technology by law enforcement agencies across the country. And while it is true that Irish police could still decide to pass on facial recognition, it is unlikely. The government is in the process of drafting legislation that would give Gardaí access to FRT. And police in the neighboring UK have embraced facial recognition with aplomb.

Nor is it merely an island thing. Police in Sweden are currently pushing against the limits of the still-fresh AI Act with plans to deploy 1:N facial recognition in public spaces. And Canadian police recently contracted Idemia to provide facial recognition services.

Almost everyone facial recognition has misidentified in the US is Black

Pushback continues, however, on both sides of the Atlantic. A release says the Center for American Progress (CAP), the Lawyers’ Committee for Civil Rights Under Law and 14 other civil rights and advocacy groups have sent a letter to the National Institute of Justice (NIJ) regarding the use of AI in the criminal justice system, in response to the NIJ’s request for information ahead of a forthcoming report on the matter.

The letter does not pull punches.

“Black people and other people of color face significant civil rights harms resulting from law enforcement’s use of AI,” it says. “Algorithmic technologies used by law enforcement agencies replicate and reinforce existing racial bias and discrimination; policing technologies such as facial recognition, risk assessments, and predictive policing tools often enable and accelerate this trend.”

The coalition says biases are “baked into the development and deployment” of AI-driven biometric tools. It points an accusatory finger at databases used to train AI engines – “vast troves of data that are rife with inaccuracies and reflect existing societal biases and inequities.” Furthermore, statistics show that the human police who use facial recognition are also prone to bias.

“For instance,” the letter says, “a recent report found that the New Orleans Police Department disproportionately used facial recognition technology (“FRT”) to identify Black individuals; they deployed the technology on Black people over 90 percent of the time.”

All but one of the seven people in the U.S. known to have been misidentified by facial recognition and wrongfully arrested are Black.

Racial bias is not the only issue. The coalition also says transparency is lacking. “Shrouded in secrecy, law enforcement agencies use AI technologies – ranging from police surveillance to suspect identification and prison-management tools – without meaningful public input or oversight.” Without transparency, there is no accountability.

The letter concludes with four recommendations on what the NIJ should address in its forthcoming report. They read more like demands.

First, “law enforcement’s use of racially discriminatory technologies, including FRT and predictive policing tools, should be prohibited.”

Second, “the use of algorithmic surveillance tools should be prohibited in public places and in any setting that could chill the exercise of First Amendment rights.”

Third, “Independent pre- and post-deployment audits of AI technologies used for law enforcement purposes should be required.”

Finally, “law enforcement use of AI technologies should be disclosed to defendants in criminal cases.”
