Industry, civil society weigh in on biometrics policy in letters to White House
Stakeholders and advocates are chiming in with responses to a U.S. government request for information on public and private sector uses of biometrics, providing a snapshot of a social dialogue straining against the inertia of misunderstanding and self-interest.
The Office of Science and Technology Policy (OSTP) posted the RFI in October to gather information on “past deployments, proposals, pilots, or trials, and current use of biometric technologies” for identity verification, identification, and attribute inferencing.
When comments closed over the weekend, submissions had come in from a range of stakeholder groups. A somewhat optimistic review might conclude that opinion is converging, at least around the need for clear definitions, the value provided by NIST’s efforts, and the need for more federal government action.
Oosto has shared its ‘Scale of Sensitivity’ for six common facial recognition applications with the White House. The use cases and their associated risks were set out by the company late last year as part of a proposal on AI ethics compliance.
An open letter by Oosto CEO Avi Golan reviews the company’s products and how they can contribute to physical security, offering an explanation of watchlists and safelists. He then discusses the company’s efforts to ensure responsible stewardship, including its ethics review board, terms of service, and advanced privacy settings. The potentially thorny topic of the sources of data Oosto uses is addressed before the six use cases are explored.
“It is critical that government leaders recognize the power of visual AI to save and sustain lives,” states Golan. “Visual AI today is often misunderstood or misrepresented. As a world-leading firm in this space, we encourage regulators to conduct thoughtful due diligence in order to provide meaningful guidance and an appropriate legal framework regulating the use of biometrics in context-specific scenarios. Moreover, we need a cohesive national policy for the ethical use of facial recognition vs. a patchwork quilt of differing state-level regulations which make commercial compliance challenging.”
Golan argues that the use of face biometrics to empower businesses and governments is not mutually exclusive with protection for civil liberties and privacy rights, and calls for a common definition of “ethical facial recognition.”
Clearview AI recommends five requirements for facial recognition technology providers, including offering user training and mandatory adherence to a set of Facial Recognition Safety Principles. The principles, also outlined in the document, pertain to minimum standards for accuracy and consistency among demographic groups, privacy, auditability, secondary review of results, use policies, the status of biometric matches as leads rather than direct evidence, and bans on the technology’s use to target constitutionally protected activities.
Prohibiting facial recognition’s use on people participating in protected activities, presumably including protesters, and enacting accuracy and non-discrimination requirements are included in a list of seven proper and reasonable safeguards suggested by Clearview.
The company characterizes facial recognition as a critical public safety tool, and quotes Center for Strategic and International Studies Senior Vice President James Andrew Lewis, “(t)he level of confusion and misinformation in the FRT (facial recognition technology) discussion is astounding… FRT is improving rapidly, and any critique based on data from even a few years ago runs the risk of being entirely wrong.”
A group of legal experts and civil society advocates including Haki na Sheria’s Yussuf Bashir and CIPIT Research Fellow Grace Mutung’u have also weighed in, calling for a comprehensive government response to biometric technologies in the form of moratoriums on mandatory participation in biometric systems, legislation targeting disparate impacts, and a broad review of human rights impacts from biometrics.
A letter written in response to the RFI and co-signed by the Digital Welfare State & Human Rights Project, Center for Human Rights and Global Justice (CHRGJ) at the NYU School of Law and the Institute for Law, Innovation & Technology (iLIT) at Temple University’s Beasley School of Law charges that biometric technologies pose “existential threats…to human rights, democracy, and rule of law.”
Elizabeth M. Renieris and Yong Suk Lee of the Notre Dame Technology Ethics Center (ND TEC) focus on the use cases and potential harms of biometric technology. Their letter therefore presents a litany of potential problems related to bias, bad science, and data privacy risks, though the authors note that they “are encouraged by the OSTP’s efforts to consider policies that can equitably harness the benefits of these technologies while providing effective and iterative safeguards against their anticipated abuses and harms.”
Federal privacy legislation and clear definitions are the necessary starting point for good biometrics policy, a Washington-based software industry group says.
The Software & Information Industry Association (SIIA) has commended OSTP’s efforts to create a Bill of Rights for an Automated Society, or ‘AI Bill of Rights,’ and recommends that the body support current efforts to develop guidelines for responsible and ethical use for biometrics and other AI technologies. That means supporting the work of the National Institute of Standards and Technology (NIST).
SIIA also recommends a distinction between private and public sector uses of biometrics, and expresses concern that including “derived data” in the definition of biometric technologies could unnecessarily muddy the waters.
The Information Technology Industry Council (ITIC) is calling for OSTP to focus on leveraging existing standards and frameworks to regulate high-risk applications of biometrics.
The group further echoes Oosto on the need to distinguish between use cases, and the SIIA on the need to differentiate between public and private sector applications. Likewise, ITIC agrees with the SIIA on the need to support NIST, specifically its work on an AI Risk Management Framework.
(Updated January 20, 2022 at 4:02pm with comment from Clearview AI)