
Those wronged by facial recognition join rights groups in calling for police ban

Racial bias and lack of accountability cited as major problems

There is a small but growing lobby made up of people who have been wrongly arrested as a result of facial recognition technology. Among them is Robert Williams, an American who was handcuffed in front of his family in 2020 after police facial recognition misidentified him as a suspect in a felony larceny case.

Williams is now calling for police forces in Ireland to scrap their plans to deploy the biometric tech. In comments made at an event in Dublin hosted by the Irish Council for Civil Liberties (ICCL) and issued in a release, Williams points to the risk that comes with using tools that are prone to misidentify people of color.

“Federal studies have shown that facial recognition systems misidentify Asian and Black people up to 100 times more often than white people,” Williams says. “In America, we’re trying to undo the harms that FRT has already done. Here in Ireland, you have an opportunity not to introduce it in the first place. I hope your government will listen to experiences like mine and think twice before bringing FRT into policing.”

Williams refers to a 2019 report from NIST, which has since been updated, showing that some algorithms were 10 to 100 times more likely to misidentify Black or East Asian faces than white faces. Not all of the algorithms evaluated are in commercial production, however, and others were found to have imperceptible differences in performance between demographics, prompting NIST Biometric Standards and Testing Lead Patrick Grother to urge those implementing facial recognition to be specific in evaluating bias.

Williams’ statement on the U.S. could also be debated, given the uptake of facial recognition technology by law enforcement agencies across the country. And while it is true that Irish police could still decide to pass on facial recognition, it is unlikely. The government is in the process of drafting legislation that would give Gardaí access to FRT. And police in the neighboring UK have embraced facial recognition with aplomb.

Nor is it merely an island thing. Police in Sweden are currently pushing against the limits of the still-fresh AI Act with plans to deploy 1:N facial recognition in public spaces. And Canadian police recently contracted Idemia to provide facial recognition services.

Almost everyone facial recognition has misidentified in the US is Black

Pushback continues, however, on both sides of the Atlantic. A release says the Center for American Progress (CAP), the Lawyers’ Committee for Civil Rights Under Law and 14 other civil rights and advocacy groups have sent a letter to the National Institute of Justice (NIJ) regarding the use of AI in the criminal justice system, in response to the NIJ’s request for information ahead of a forthcoming report on the matter.

The letter does not pull punches.

“Black people and other people of color face significant civil rights harms resulting from law enforcement’s use of AI,” it says. “Algorithmic technologies used by law enforcement agencies replicate and reinforce existing racial bias and discrimination; policing technologies such as facial recognition, risk assessments, and predictive policing tools often enable and accelerate this trend.”

The coalition says biases are “baked into the development and deployment” of AI-driven biometric tools. It points an accusatory finger at databases used to train AI engines – “vast troves of data that are rife with inaccuracies and reflect existing societal biases and inequities.” Furthermore, statistics show that the human police who use facial recognition are also prone to bias.

“For instance,” the letter says, “a recent report found that the New Orleans Police Department disproportionately used facial recognition technology (“FRT”) to identify Black individuals; they deployed the technology on Black people over 90 percent of the time.”

All but one of the seven people in the U.S. who have been misidentified by facial recognition and wrongfully arrested are Black.

Racial bias is not the only issue. The coalition also says transparency is lacking. “Shrouded in secrecy, law enforcement agencies use AI technologies – ranging from police surveillance to suspect identification and prison-management tools – without meaningful public input or oversight.” Without transparency, there is no accountability.

The letter concludes with four recommendations on what the NIJ should address in its forthcoming report. They read more like demands.

First, “law enforcement’s use of racially discriminatory technologies, including FRT and predictive policing tools, should be prohibited.”

Second, “the use of algorithmic surveillance tools should be prohibited in public places and in any setting that could chill the exercise of First Amendment rights.”

Third, “Independent pre- and post-deployment audits of AI technologies used for law enforcement purposes should be required.”

Finally, “law enforcement use of AI technologies should be disclosed to defendants in criminal cases.”
