New Jersey law enforcement blocked from using facial recognition tech from Clearview AI
New Jersey is no longer allowing law enforcement in the state to use the biometric facial recognition app Clearview AI, following a moratorium put forward by state Attorney General Gurbir Grewal, writes Mashable.
“AG asked that all law enforcement agencies in New Jersey stop using Clearview’s technology until we get a better handle on the situation,” reads an email from the New Jersey Attorney General’s Director of Communications, Sharon Lauchaire. “We have communicated this request to the 21 County Prosecutors, and asked that they share it with all of the police departments and other law enforcement agencies within their respective jurisdictions.”
The ACLU of New Jersey applauded the initiative and expressed concern that facial recognition technology could lead “to discrimination and false-positives of people of color, women, and non-binary people.”
“I am deeply concerned that it is capable of fundamentally dismantling Americans’ expectation that they can move, assemble, or simply appear in public without being identified,” Democratic Sen. Edward Markey of Massachusetts wrote in a letter to the CEO of Clearview AI, Hoan Ton-That.
Markey is asking Clearview to name the law enforcement agencies that have licensed its technology, to disclose any security breaches that may have occurred, and to explain which employees have access privileges to the data. Markey also wants to know whether the database contains biometric data of children under 13.
“Any technology with the ability to collect and analyze individuals’ biometric information has alarming potential to impinge on the public’s civil liberties and privacy,” reads Markey’s letter. “Clearview’s product appears to pose particularly chilling privacy risks.”
A Buzzfeed investigation revealed that Clearview AI boasted in its marketing pitch that the NYPD had used its facial recognition software to identify a terrorism suspect. The NYPD denied that the company was involved in the case.
The New York Times recently reported that the app can cross-match submitted photos against a database of over 3 billion photos collected from the open web, including from Twitter, Facebook, Instagram and YouTube.
Wired writes that Clearview AI violated the policies of a number of companies with its automated web scraping. Some of those companies have suggested that such practices may even have violated the Computer Fraud and Abuse Act through unauthorized access of user data, though the Ninth Circuit Court of Appeals ruled last year that automated scraping of publicly accessible data does not violate the act. Last week, Twitter demanded that Clearview AI stop scraping its platform and delete the biometric data it had collected from it. The New York-based AI company used the database to develop a facial recognition app that it claims has been licensed by 600 law-enforcement agencies.
A number of civil liberties groups and tech companies have jointly called for a federal law that would protect privacy rights and biometric data.
Clearview AI is facing a potential class action lawsuit for allegedly violating the Illinois Biometric Information Privacy Act (BIPA) by collecting the facial biometrics of Illinois residents without consent.