Facial biometrics banned in Boston for local government use as flouted best practices stoke concern
The dynamics of the biometric facial recognition market may be shifting, as failures by police to follow best practices for the technology’s use are fueling a furor over the automation of existing law enforcement processes.
Boston will not use facial recognition in any city government systems, after its City Council unanimously passed an ACLU-backed ban, the Associated Press reports.
The move makes Boston the second-largest city in the U.S., after San Francisco, to restrict facial recognition’s use. Massachusetts cities Cambridge, Springfield, Northampton, Brookline and Somerville have passed similar local laws.
A pair of Boston city councillors released statements alleging that the technology is racially discriminatory. The article notes that “several studies” have shown higher error rates when the technology identifies people with darker skin, but does not describe those studies, or note that most of the algorithms they considered are not used by police in the U.S. or anywhere else.
The bill becomes law in 15 days, unless Boston’s mayor takes the unlikely step of blocking it.
Running wild does not help
Recent events, like the exposure of Clearview AI’s highly questionable business practices and allegedly illegal data collection techniques, and the deaths of people from visible minorities at the hands of law enforcement officers, may have made it more difficult for police to defend the use of facial recognition as a forensic tool.
At least six officers from Australia’s Victoria Police signed up to use Clearview AI between November 2019 and March 2020, ZDNet reports.
A spokesperson for Victoria Police has said the app was tested by a small number of people, and no images related to investigations were uploaded during those tests.
“Feel free to run wild with your searches,” the company urged officers in marketing emails.
The article notes that other law enforcement agencies in Australia have tested Clearview AI’s technology, and that the company makes no mention in its promotional materials of best practices or official approvals.
The purchase of facial recognition systems by two police forces in the Toronto area has drawn scrutiny from Vice, which claims in the article’s subhead that the technology has been shown to be racially biased, without referring to any particular algorithm or system.
Police in York Region budgeted $1.68 million in 2019 to stand up a system for face, palm and fingerprint identification over two years, and Peel Police confirmed they have a license for facial recognition.
York Regional Police admitted that some of its officers had trialed software from Clearview AI, after initially denying it.
Vice quotes an academic who claims the technology will make Toronto like “the Antebellum South.”
Tender documents show DataWorks Plus, Gemalto, Motorola, NEC and others have made bids or are participating as “plan takers.”
The use of facial recognition on protestors in the U.S. may also undermine arguments that law enforcement will not misuse the technology.
The U.S. Constitution provides protection for people’s faces in public, protection supported by a bipartisan legislative proposal and recent Supreme Court decisions, Womble Bond Dickinson (U.S.) LLP Senior Partner Theodore F. Claypoole writes for The National Law Review.
Claypoole argues that the warrant system already provides the appropriate checks and balances to mitigate potential harms from facial recognition, just as it does for other forms of investigation.
The Supreme Court ruled seventy years ago that the rights to free speech and free association entail a right to not have one’s identity associated with a political cause, according to the article.
United Nations High Commissioner for Human Rights Michelle Bachelet, meanwhile, has called for a moratorium on the use of facial recognition on peaceful protestors, Yahoo News reports. The call came as her office published a report, commissioned two years ago by the UN Human Rights Council, on the impact of new technology on free assembly, including protest.
The use of the technology during protests should be paused “until states meet certain conditions including human rights due diligence before deploying it” according to Bachelet’s statement. The report, which is much more balanced than many calls for limitations to the technology’s use, warns of facial recognition’s potential to deter public expression, and that it “may also perpetuate and amplify discrimination, including against Afro-descendants and other minorities.”
“Facial recognition should not be deployed in the context of peaceful protests without essential safeguards regarding transparency, data protection, and oversight in place,” Bachelet concludes.
Best practices emphasized by Cognitec, Rank One
In stark contrast to Clearview’s approach, some facial recognition providers are doubling down on the need for police to follow proper procedures to benefit from facial biometrics.
A statement by Cognitec points out that facial recognition does not make identification decisions, but rather renders probabilistic judgements meant to generate leads. Responding to several tech giants pulling out of the market, Cognitec writes that the technology is a proven tool for fighting serious crime and identifying missing children and human trafficking victims.
“Racial biases are a human condition,” the company says, and algorithms can be trained to make judgements free from any prejudices held by those who use them.
“Nevertheless, the technology requires ongoing training with more diverse data. Vendors should implement best practices that identify and minimize any hidden biases, establish metrics for fairness, and test algorithms in real-world scenarios,” the company writes.
“The dangers of misuse and technical limitations of face recognition technologies are well known, well documented and well argued. And so are their many benefits! But governments worldwide continue to struggle with providing sensible regulations that set definite rules for face recognition applications, especially for law enforcement. Cognitec strives to contribute mindful expertise to the establishment of clear guidelines that benefit each of us personally and society as a whole.”
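The fairness testing Cognitec alludes to can be made concrete. The sketch below, written in Python, computes false match rates per demographic group over a labeled set of comparison trials, one common way to surface the kind of hidden biases the company mentions; the field names and threshold are assumptions made for illustration, not any vendor’s actual API.

```python
# Illustrative sketch of a per-group fairness metric: compare false match
# rates across demographic groups on labeled test comparisons. All field
# names and the threshold are hypothetical, chosen only for the example.
from collections import defaultdict

def false_match_rates_by_group(trials, threshold=0.6):
    """trials: iterable of dicts with keys 'group', 'score', 'same_person'.

    Returns the false match rate (impostor pairs scoring at or above the
    threshold) for each demographic group, making disparities visible.
    """
    impostors = defaultdict(int)      # impostor comparisons per group
    false_matches = defaultdict(int)  # impostor pairs wrongly above threshold
    for t in trials:
        if not t["same_person"]:  # only impostor pairs can produce false matches
            impostors[t["group"]] += 1
            if t["score"] >= threshold:
                false_matches[t["group"]] += 1
    return {g: false_matches[g] / impostors[g]
            for g in impostors if impostors[g]}
```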
Rank One, which has been blamed for the wrongful arrest of a Black man in Detroit in what appears to be a case of police misconduct, responded in more detail to the initial reports of the incident, both directly to Biometric Update and in public communications.
Initial coverage has emphasized allegations of poor performance by facial recognition algorithms matching people with dark skin; however, NIST testing showed that Rank One’s algorithm delivers lower error rates for Black men than for any other demographic group.
The company has published a blog post on “How Forensic Face Recognition Works,” which explains to a general audience that, contrary to common impressions, U.S. police are not operating a mass real-time facial recognition system. Such a system, according to the post, would clearly violate the Constitution’s Fourth Amendment (see Claypoole’s argument above). For the same audience, and perhaps some law enforcement personnel, the post then explains how forensic facial recognition is actually used, in accordance with best practices, as the sketch below illustrates.
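To make the distinction concrete, here is a minimal sketch, again in Python, of the kind of candidate-list search such a post describes: the system ranks gallery entries by similarity and hands a list of scored leads to a human examiner, rather than returning a yes-or-no identification. The function name, score scale, and thresholds are illustrative assumptions, not Rank One’s implementation.

```python
# Hypothetical sketch of forensic lead generation: a probe face embedding
# is compared against an enrolled gallery, and the top-scoring candidates
# are returned for human review. No binary "identification" is ever made.
import numpy as np

def generate_leads(probe_embedding, gallery, top_k=10, min_score=0.45):
    """Rank gallery entries by cosine similarity to the probe.

    gallery: list of (subject_id, embedding) pairs.
    Returns scored candidates for examiner review, not an identification.
    """
    leads = []
    probe_norm = np.linalg.norm(probe_embedding)
    for subject_id, emb in gallery:
        score = float(np.dot(probe_embedding, emb) /
                      (probe_norm * np.linalg.norm(emb)))
        if score >= min_score:
            leads.append((subject_id, score))
    leads.sort(key=lambda pair: pair[1], reverse=True)
    return leads[:top_k]  # investigative leads only, never probable cause
```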
In an email to Biometric Update, Rank One Co-founder and CEO Brendan F. Klare noted that the image shown in news articles appears to be of low quality, and could have been flagged by face recognition quality metrics as unlikely to produce “a high fidelity match.”
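A quality gate of the kind Klare describes might look like the following sketch, which flags probe images that are too small or too blurry before any search is run. The specific heuristics and thresholds are stand-ins for a vendor’s proprietary quality metrics, assumed purely for illustration.

```python
# Minimal sketch of a probe-image quality gate, assuming OpenCV is
# available. Resolution and variance-of-Laplacian sharpness stand in for
# a real vendor's face quality metrics; the thresholds are hypothetical.
import cv2

MIN_FACE_PIXELS = 112   # assumed minimum image side length, in pixels
MIN_SHARPNESS = 100.0   # assumed variance-of-Laplacian blur threshold

def probe_quality_ok(image_path):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        return False, "unreadable image"
    h, w = img.shape
    if min(h, w) < MIN_FACE_PIXELS:
        return False, "resolution too low for a reliable search"
    sharpness = cv2.Laplacian(img, cv2.CV_64F).var()
    if sharpness < MIN_SHARPNESS:
        return False, "image too blurry; flag as unlikely to match well"
    return True, "quality sufficient to attempt a search"
```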
Even more damning, Klare notes that while a trained facial recognition examiner made an incorrect determination, the investigative lead report produced by the examiner “stated in bold letters: ‘THIS DOCUMENT IS NOT A POSITIVE IDENTIFICATION. IT IS AN INVESTIGATIVE LEAD ONLY AND IS NOT PROBABLE CAUSE FOR ARREST.’” The report also states clearly that “further investigation is needed to develop probable cause for arrest,” but a loss prevention contractor who identified the wrongfully arrested man was not a witness, and his testimony therefore does not contribute to establishing probable cause.
This, according to Klare, is the main problem, “in that it is against standard policies for a face recognition investigative lead to be used as probable cause for arrest. This incident marks the first known false arrest involving the use of face recognition by law enforcement in the U.S., despite over a decade of use, and highlights why the results of a face recognition search plus a facial examiner adjudication alone, without independent corroborating evidence, does not constitute probable cause for an arrest warrant.”