Senators urge federal agencies to end use of Clearview AI technology
A group of Democratic Senators and Representatives has written letters to five federal departments urging them to end the use of facial recognition technology, singling out Clearview AI’s facial recognition products as “particularly dangerous.” The lawmakers raise concerns about the loss of anonymity and the threats facial recognition technology poses to communities of color and immigrant communities. Clearview AI counters each claim.
“Clearview AI’s technology could eliminate public anonymity in the United States,” write Senators Edward J. Markey (D-Mass.) and Jeff Merkley (D-Ore.) and Representatives Pramila Jayapal (WA-07) and Ayanna Pressley (MA-07) in letters sent to the Departments of Homeland Security, Justice, Defense, the Interior, and Health and Human Services.
The lawmakers cite the August 2021 Government Accountability Office (GAO) report that collated federal use of facial recognition tools, including Clearview AI’s, with such use typically beginning with a free trial. The Departments of Homeland Security, Justice, the Interior, and Health and Human Services all reported using Clearview AI services for domestic law enforcement.
The lawmakers also raise concerns that facial recognition technology poses unique threats to Black communities, other communities of color, and immigrant communities, citing National Institute of Standards and Technology (NIST) findings that people of color were up to 100 times more likely than white men to be misidentified by facial recognition algorithms.
They state that marginalized communities are already over-policed “and the proliferation of biometric surveillance tools is, therefore, likely to disproportionately infringe upon the privacy of individuals in Black, Brown, and immigrant communities.
“With respect to law enforcement use of biometric technologies specifically, reports suggest that use of the technology has been promoted among law enforcement professionals, and reviews of deployment of facial recognition technology show that law enforcement entities are more likely to use it on Black and Brown individuals than they are on white individuals.”
Clearview AI facial recognition products singled out
“Clearview AI reportedly scrapes billions of photos from social media sites without permission from or notice to the pictured individuals,” write the lawmakers. “In conjunction with the company’s facial recognition capabilities, this trove of personal information is capable of fundamentally dismantling Americans’ expectation that they can move, assemble, or simply appear in public without being identified. Reports indicate that use of this technology is already threatening to do so.
“This is especially troubling because studies show that when individuals believe the government is surveilling them, they are likely to avoid engaging in activities protected by the First Amendment. The use of facial recognition technology runs the risk of deterring the public from participating in marches or rallies, or speaking out against injustice.”
The same group of lawmakers was part of a larger group that co-sponsored the revival of the Facial Recognition and Biometric Technology Moratorium Act in June 2021.
Clearview believes it is reducing bias
“I have the utmost respect for Senators Markey and Merkley, and Reps. Jayapal and Pressley. However, on the topic of Clearview AI, recent NIST testing has shown that Clearview AI’s facial recognition algorithm shows no detectable racial bias, and to this date, we know of no instance where Clearview AI’s technology has resulted in a wrongful arrest,” writes Clearview CEO Hoan Ton-That in a statement emailed to Biometric Update.
“In the NIST 1:1 Face Recognition Vendor Test (“FRVT”) that evaluates demographic accuracy, Clearview AI’s algorithm consistently achieved greater than 99 percent accuracy across all demographics.
“In the NIST 1:N FRVT, Clearview AI’s algorithm correctly matched the correct face out of a lineup of 12 million photos at an accuracy rate of 99.85 percent, which is much more accurate than the human eye,” writes Ton-That, who goes on to cite the Innocence Project, which helps to overturn wrongful convictions.
“According to the Innocence Project, 70 percent of wrongful convictions result from eyewitness lineups.” (To clarify, the Innocence Project figures are for 69 percent of the more than 375 wrongful convictions in the U.S. overturned by post-conviction DNA evidence.)
“Accurate facial recognition technology like Clearview AI is able to help create a world of bias-free policing. As a person of mixed race this is highly important to me,” writes Ton-That.
“We are proud of our record of achievement in helping over 3,100 law enforcement agencies in the United States solve heinous crimes, such as crimes against children and seniors, financial fraud and human trafficking.”
Growing pressure abroad, more deals at home
Clearview AI is facing potential legal disputes in Canada and the UK over a range of issues relating to its conformity with regulators’ recommendations and how it operates.
In the U.S., meanwhile, it continues to win new contracts with high-profile organizations, such as a research contract with the U.S. Air Force and an even smaller contract, this time with the FBI.
Article Topics
biometric identification | biometrics | Clearview AI | facial recognition | government purchasing | law enforcement | regulation | U.S. Government