
Senators ask DoJ if federal facial recognition funding violates civil rights

A group of 18 U.S. senators from the Democratic Caucus has written to the Department of Justice to express concern that funding for facial recognition programs could contribute to civil rights violations.

Senator Reverend Raphael Warnock (D-GA), Senate Judiciary Committee Chair Richard Durbin (D-IL) and 16 other senators pose eight questions to the DoJ about what assurance Americans have that federal funds are not being spent in violation of Title VI of the Civil Rights Act. Their concerns were sparked by recent news of the false arrest of Randal Quran Reid, one of at least six known cases in which facial recognition was used prior to the arrest of an innocent Black person.

Title VI stipulates that no program or activity that discriminates on the basis of race, color or national origin can receive federal funding.

The senators’ letter refers to a study from 2012 by ROC.ai Co-founder and Chief Scientist Brendan Klare and others. It also refers to a NIST report on demographic differentials, but bases its argument on the 2019 original without mentioning the 2022 update. The reference also generalizes across the more than 100 algorithms assessed by NIST, something the report’s lead author specifically warns against.

“Errors in facial recognition technology can upend the lives of American citizens,” the letter states. “Should evidence demonstrate that errors systematically discriminate against communities of color, then funding these technologies could facilitate violations of federal civil rights laws.”

It also notes that DoJ was found to be using 11 different facial recognition systems in a 2021 GAO report, and that use of the technology is expanding both at DoJ and other federal agencies.

No reference is made to the policy violations and best practice breakdowns observed in the cases of wrongful arrest involving facial recognition.

The senators are asking the DoJ to respond by the end of February.

Underlying problems with who is searched, and why

The history of misidentifications by law enforcement authorities using facial recognition reflects bias that remains embedded in the technology itself, as well as in how it is used, an American academic argues.

Drexel University Associate Professor of Bioethics and History Sharrona Pearl, writing in The Conversation, argues that “face recognition technology has invaded everyone’s privacy” while “paying particular attention to those whom society and its structural biases deem to be the greatest risk.”

The analysis acknowledges that false positives have “declined dramatically,” but argues that as a technology whose development was funded for border surveillance and similar goals, it is tainted by the same kind of racist ideology as the debunked pseudoscience of phrenology.

The frequently offered argument that the biometric technology is only used to investigate serious crimes, thus sparing most people from potential harms, is undermined by an incident in Florida, in which Miami police used Clearview AI’s facial recognition to identify and then arrest a homeless man.

The homeless man in the incident, reported by Reason, was arrested for “obstruction by a disguised person” after providing an assumed name and birthdate. The charge was dropped by prosecutors for lack of probable cause.

The report quotes a staff attorney with the Electronic Frontier Foundation saying that the encounter shows how facial recognition is often “used to mass surveil the population” rather than fight the crimes it is ostensibly deployed to fight.
