State facial recognition regulations considered as ACLU and Clearview stand in for informed debate
Massachusetts’ new regulation of facial recognition diverges from the well-worn paths of either blocking the technology’s benefits or leaving its potential harms unregulated, The New York Times writes. The Times says legislators credit that compromise to the American Civil Liberties Union (ACLU) of Massachusetts, and in particular Kade Crockford.
What the Times calls “an all-or-nothing” approach has led to restrictions, moratoriums and bans, including actions by city councils in Oakland, Portland, San Francisco and Minneapolis. One of the first local bans, in Somerville, Massachusetts, was written in cooperation with the ACLU, which also supported the crafting of proposals in Pennsylvania and other states.
Legislation is also pending in numerous jurisdictions, usually spurred explicitly by the ACLU, or by a company that has come to stand as a cipher for irresponsible and intrusive surveillance.
Face biometrics regulation as ACLU vs Clearview
The ACLU of Massachusetts examined the practices of police in the state in 2019, and found that a 2015 communication between law enforcement agencies described a system for police to use facial recognition with no accompanying policy or legal threshold for use. The group also found that police in the state were using Clearview AI, which has become a recurring bogeyman invoked (rightly or wrongly) in public debate and even legislation itself.
At the federal level, Clearview is cited in a recent Congressional Research Service report, which notes media speculation on whether the attack on the U.S. Capitol will influence policymakers’ positions on facial recognition’s use by law enforcement.
The new Massachusetts law requires a judge’s permission for police to attempt a face biometrics search, sets rules for who can operate the systems, and stands up a commission to make further recommendations. It was also initially rejected by Governor Charlie Baker.
Crockford invokes Philip K. Dick’s ‘The Minority Report,’ which revolves around a crime-prediction system, when discussing facial recognition, but said it is “politically impossible” to ban the technology in Massachusetts. A new state bill proposes further restrictions on face biometrics in public spaces, and Crockford says the process is “very much not done.”
Other ACLU offices and privacy groups have declared that all uses of facial recognition must be banned, the Times points out.
The ACLU has also launched a petition it plans to deliver to President Biden, demanding a ban on the use of facial recognition by federal law enforcement.
Both Virginia’s House and Senate have unanimously backed a bill to ban the use of facial recognition by local police in the state, The Virginian-Pilot writes (via Govtech).
The “de facto ban,” according to its sponsor Delegate Lashrecse Aird (D-Petersburg), would require the General Assembly to pass a law specifying that a particular agency can use face biometrics, and require them to control the system completely, which may not be technically feasible.
Aird previously introduced a less restrictive bill, which would have required local authorization. Her motivation for proposing regulation of face biometrics: an investigation by The Virginian-Pilot into Norfolk Police’s use of Clearview.
The lawmaker says she had toned down her proposal to win support in the Senate, and was surprised when a Republican state senator strengthened its restrictions.
In Ann Arbor, Michigan, the Independent Community Police Oversight Commission expressed disapproval of any future use of facial recognition by city police, and support for a pre-emptive ban on the technology, according to Michigan Daily.
Commission Chair Lisa Jackson claimed that facial recognition does “a horrific job of identifying people of color” and is “terrible” at recognizing women.
The New York Civil Liberties Union (NYCLU) has complained to Police Commissioner Dermot Shea in a letter that the department has not met its transparency requirements under the Public Oversight of Surveillance Technology (POST) Act.
The required disclosure by the department is missing information, the NYCLU writes, including the vendor of the face biometrics algorithms it uses, and it includes fields the advocacy group says were copy-and-pasted, sometimes failing to even describe the correct system.
The group repeats its allegations that the technology is generally biased against people with darker skin, and notes that the system’s effectiveness cannot be evaluated with the information provided.
Police in Mobile, Alabama meanwhile are using Clearview, according to an investigative report by NBC 15 News, and the State Department of Public Safety has paid the company $200,000 in the past two years.
Mobile Police Chief Lawrence Battiste, however, says facial recognition technology was not found to be reliable enough when its performance was compared in situations where police had already identified the suspect.
Another compromise coming?
Vermont Attorney General TJ Donovan is asking lawmakers in that state to loosen recently-enacted restrictions on facial recognition use by police to address a backlog of child-exploitation cases, VTDigger reports.
The commander of Vermont’s Internet Crimes Against Children Task Force, Detective Matthew Raymond, expressed concern that there may “be hundreds of kids waiting to be saved and we can’t get to them because we can’t use this technology currently.”
Representative Tom Burditt (R-West Rutland) serves as vice chair of the State House Judiciary Committee, and said while he supported the initial moratorium due to lower accuracy rates for non-white people, he supports the proposed change as a narrow exception.
The Vermont ACLU, however, opposes the measure as it is currently written, and wants the language in the new bill to restrict what databases can be used for comparison. Legislators are reported to be working on language to prevent police from searching for suspects who are not already known to them (presumably meaning mugshots).
To ban or not to ban: the Biometrics Institute asks the question
An 11-page report from the Biometrics Institute traces recent criticisms of the technology, from the Times’ investigation of Clearview, through the wrongful arrest of a Detroit man which police blamed on facial recognition, and local bans in the U.S., as well as decisions in Europe.
NIST testing has been seized on as evidence for claims it does not support, the Institute points out, with a 98 percent identity verification rate for women cited as evidence of “bias.”
Other concerns are more legitimate, however, such as around the proportionality of fixed public cameras. The Institute considers the questions raised by these kinds of deployments, and by the impacts of COVID-19.
Finally, the organization polled a diverse group of members and other stakeholders, some of whom support a ban or moratorium on facial recognition, and some of whom do not.
“It is likely that the two options are legislative frameworks or more bans and moratoria,” the report concludes, and calls for multi-stakeholder dialogue to arrive at a sound position.