Senators ask DoJ if federal facial recognition funding violates civil rights
A group of 18 U.S. senators from the Democratic Caucus has written to the Department of Justice to express concern that funding for facial recognition programs could contribute to civil rights violations.
Senator Reverend Raphael Warnock (D-GA), Senate Judiciary Committee Chair Richard Durbin (D-IL) and 16 other senators pose eight questions to the DoJ about what assurances Americans have that federal funds are not being used in violation of Title VI of the Civil Rights Act. Their concerns were prompted by recent news of the false arrest of Randal Quran Reid, as well as five other cases in which facial recognition was used prior to the arrest of an innocent Black person.
Title VI prohibits discrimination on the basis of race, color or national origin in any program or activity that receives federal funding.
The senators’ letter refers to a 2012 study by ROC.ai Co-founder and Chief Scientist Brendan Klare and others. It also refers to a NIST report on demographic differentials, but bases its argument on the original 2019 version and does not mention the 2022 update. The reference also generalizes across the more than 100 algorithms NIST assessed, something the report’s lead author specifically warns against.
“Errors in facial recognition technology can upend the lives of American citizens,” the letter states. “Should evidence demonstrate that errors systematically discriminate against communities of color, then funding these technologies could facilitate violations of federal civil rights laws.”
It also notes that a 2021 GAO report found the DoJ using 11 different facial recognition systems, and that use of the technology is expanding both within the DoJ and at other federal agencies.
No reference is made to the policy violations and breakdowns in best practices observed in the wrongful arrest cases involving facial recognition.
The senators are asking the DoJ to respond by the end of February.
Underlying problems with who is searched, and why
The history of misidentifications by law enforcement using facial recognition is an expression of bias that remains embedded both in the technology itself and in how it is used, an American academic argues.
Drexel University Associate Professor of Bioethics and History Sharrona Pearl, writing in The Conversation, argues that “face recognition technology has invaded everyone’s privacy,” while “paying particular attention to those whom society and its structural biases deem to be the greatest risk.”
The analysis acknowledges that false positives have “declined dramatically,” but argues that as a technology whose development was funded for border surveillance and similar purposes, it is tainted by the same kind of racist ideology as the debunked pseudoscience of phrenology.
The frequently offered argument that the biometric technology is only used to investigate serious crimes, thus sparing most people from potential harm, is undermined by an incident in Florida, in which Miami police used Clearview AI’s facial recognition to identify and then arrest a homeless man.
The man, whose arrest was reported by Reason, was charged with “obstruction by a disguised person” after giving an assumed name and birthdate. Prosecutors dropped the charge for lack of probable cause.
The report quotes a staff attorney with the Electronic Frontier Foundation saying that the encounter shows how facial recognition is often “used to mass surveil the population” rather than fight the crimes it is ostensibly deployed to fight.