Law enforcement facial recognition examined as DHS called on to halt Clearview AI use
A coalition of more than 70 rights and civil liberties groups has called on the U.S. Department of Homeland Security (DHS) to stop all use of Clearview AI’s face biometrics in a letter to Homeland Security Secretary Alejandro Mayorkas.
The groups, including Mijente, The Center on Privacy & Technology at Georgetown Law, the ACLU, Electronic Frontier Foundation, and the Project on Government Oversight (POGO), review Clearview’s censure by New Jersey’s Attorney General, Canadian Privacy Commissioners and tech companies, as well as the frequent use of free trials of Clearview’s facial recognition software by law enforcement officers. DHS agencies have also not been forthcoming about their use of the technology, the groups say, necessitating further action while their lawsuit seeking access to the agencies’ records plays out in court.
San Mateo County Sheriff’s Office has tested Clearview’s biometrics around 2,000 times, and is now considering purchasing a license for it, according to the Half Moon Bay Review.
Officers with San Mateo County began trialing Clearview in 2018, though they are not currently using it, Assistant Sheriff Ed Wood said. It was found to be useful for identifying a person of interest about half of the time. A committee is currently considering whether to pay for a license, which typically costs around $1,000 per user per year.
BuzzFeed News reporter Ryan Mac, who has broken large portions of the Clearview AI story, criticized the lack of transparency tools Clearview provides to police departments.
POGO Constitution Project Senior Counsel Jake Laperruque emphasizes the comments in a recent BuzzFeed News article from a law enforcement officer about the lack of effectiveness of Clearview on lower-quality images captured in the field.
This contrasts with the marketing materials from the company, which he says give an unrealistically optimistic impression of what the facial recognition software is capable of.
With both police use of and public interest in face biometrics increasing, Consumer Reports takes up the case. The article notes that security camera footage is often of insufficient quality to be used for biometric matching.
It also addresses demographic differentials with reference to research from 2019 and earlier, and accepts at face value the Detroit Police Department’s statement that “automation bias” led the force’s detectives to repeatedly violate procedure in the wrongful arrest of Robert Williams.
Mixing law enforcement and commercial deployments of forensic one-to-one and real-time one-to-many systems without differentiation, Consumer Reports notes that healthy skepticism of facial recognition could become even more important.
The 22-member panel mandated by Massachusetts lawmakers as part of a police reform bill has begun its discussion of facial recognition’s reliability, and associated ethical and legal concerns, the Gloucester Times reports.
The panel of legislators, educators, retired judges, civil liberties advocates, law enforcement officials and technology experts is required to report its findings by the end of the year.
Vancouver police formulating new policy
In response to the decision by the British Columbia Provincial and Federal Privacy Commissioners that Clearview violates Canadian law, the Vancouver Police Department (VPD) is researching best practices and planning to draft a new policy by the end of the year, reports Vancouver is Awesome.
A police report sets out conditions for testing other face biometrics software, but police have been instructed not to use the technology for investigations until the policy is formulated.
The department’s prior use of Clearview’s biometrics is described as a single search conducted as part of a child pornography investigation, which did not assist in the investigation.
Russia reportedly targeting protestors, turning off cameras
Moscow Mayor Sergei Sobyanin has said that the city’s NTechLab-powered network of facial recognition cameras is now used in 70 percent of criminal investigations, The Washington Post reports, but the system also appears to be in use against protestors.
Russia has grappled with protests before and since jailing political opposition leader Alexei Navalny, and the Post quotes lawyers and people detained in the country as saying law enforcement is using face biometrics for social control. The cameras allegedly do not cover areas where senior state officials live, however.
Following Navalny’s poisoning, an FSB agent reportedly said that the cameras were turned off while the poisoning was carried out to ensure the perpetrators would not be caught.
The system is also notoriously leaky, with personal information widely available online and a police officer recently arrested for trading data.
New Zealand Police consult with researchers
Police in New Zealand are undertaking a consultation with a pair of academic researchers on how they can use facial recognition, reports TVNZ.
Dr. Nessa Lynch of Victoria University of Wellington and Dr. Andrew Chen of Auckland University will define the technology, categorize its spectrum of use and potential impacts on individual and collective rights and interests, consider current police use, relate international practices and various obligations, and issue recommendations and advice on what uses of face biometrics by police would be safe and appropriate.
Lynch and Chen both say they see potential benefits and risks from the technology’s use.
“The pace of technological change has outstripped law and regulation,” Lynch observes.
MEPs call for tougher public biometrics restrictions
A cross-party group of 40 members of the European Parliament is calling for an outright ban on facial recognition and other biometrics in public spaces, Security Matters writes.
Responding to a leaked policy proposal, the legislators called for the EU’s proposed restrictions, which could include a moratorium on the technology’s use in public spaces, to be strengthened.
Corsight Chief Privacy Officer and former UK Surveillance Camera Commissioner Tony Porter pushed back on the new proposal, saying that it fails to accurately depict biometric surveillance and makes no attempt to recognize the potential benefits of its use.
A final regulation on ‘high risk AI’ is expected to be announced by the European Commission this week.
Porter suggests that “if the initial proposed regulation seeks to bring the rest up to the standards of the best, then that must only be a good thing.”