US OSAC argues live facial recognition can protect people without violating privacy

A U.S. government organization has quietly published guidance on the use of live facial recognition, suggesting that the technology can effectively protect people without violating their rights or privacy laws.
The OSAC (Organization of Scientific Area Committees for Forensic Science) Technical Guidance Document was published in January 2024 and is hosted by the National Institute of Standards and Technology. “Framework for Implementing Passive Live Facial Recognition” is a 29-page report written by the OSAC Facial & Iris Identification Subcommittee.
Bolded within the executive summary is a warning that “Central to the ethical implementation of a live facial recognition capability is the consideration of proportionality, human rights and the right to privacy.”
“This document describes ‘privacy-by-design’ features that should be implemented in support of maintaining people’s anonymity,” the abstract continues. This is possible to do today, the paper argues, citing reports from The National Security Commission on Artificial Intelligence, The Biometrics Institute, the UK’s Biometrics and Surveillance Camera Commissioner’s Office and a framework written for UK police.
The guidance includes advice on key performance metrics for system accuracy and recommendations for successful implementation of live facial recognition.
The potential scope of application for live facial recognition, according to the paper, includes identifying wanted individuals, alerting when a person “who may cause harm” enters a given area, such as a registered sex offender entering school grounds, and identifying people who may harm themselves or others, such as stalkers, terrorists, or missing persons.
Real-time facial recognition systems should match against a watchlist, the document says, with all images and templates of non-matched individuals discarded. In context images that include the face of a matched individual, all other faces should be redacted.
The paper also identifies three myths relating to concerns about facial recognition. The myths are that facial recognition is illegal, that “facial recognition is inaccurate and biased,” and, perhaps the most controversial of the claims, that “LFR is intrusive and impacts on citizen privacy.”
Countering the last myth, OSAC says “privacy is considered at every stage” if the system is properly implemented. Only those on the watchlist are identified, and it is not possible to track people in their daily activities. The footprint of the system should be limited and specific. Only a fraction of those processed by the system will trigger an alert, and only a fraction of alerts will trigger an action, with a human reviewer performing identity confirmation.
The paper’s design guidelines address cameras and their positioning, network architecture, and the configuration of facial recognition software.
Advice is provided on decision threshold scores, watchlist composition, and how to understand the accuracy of systems in which both “false alerts” and “missed alerts” contribute to overall performance.
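To illustrate the tradeoff the paper refers to, the sketch below shows how moving a decision threshold shifts errors between the two categories: raising it reduces false alerts on people not on the watchlist but increases missed alerts for people who are. The function, score values, and threshold choices are hypothetical illustrations, not taken from the OSAC document.

```python
# Illustrative sketch (not from the OSAC paper): how a decision threshold
# trades off "missed alerts" against "false alerts" in a watchlist system.
# All scores below are made-up similarity values in [0, 1].

def alert_error_rates(watchlist_scores, non_watchlist_scores, threshold):
    """Return (missed_alert_rate, false_alert_rate) at a given threshold.

    watchlist_scores: similarity scores for faces that ARE on the watchlist
    non_watchlist_scores: scores for faces that are NOT on the watchlist
    """
    missed = sum(1 for s in watchlist_scores if s < threshold)
    false_alerts = sum(1 for s in non_watchlist_scores if s >= threshold)
    return missed / len(watchlist_scores), false_alerts / len(non_watchlist_scores)

# Hypothetical samples: watchlist matches tend to score higher than non-matches.
watchlist = [0.91, 0.84, 0.78, 0.62, 0.95]
non_watchlist = [0.12, 0.33, 0.45, 0.71, 0.28, 0.19, 0.55, 0.40]

for t in (0.5, 0.7, 0.9):
    miss, false_ = alert_error_rates(watchlist, non_watchlist, t)
    print(f"threshold={t}: missed alert rate={miss:.2f}, false alert rate={false_:.2f}")
```

At a low threshold no watchlist match is missed but a quarter of non-matches trigger false alerts; at a high threshold false alerts vanish while most genuine matches are missed, which is why the guidance treats both error types as contributors to overall performance.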
The guidance concludes with eight recommendations. These include paying “due regard” to legal and ethical obligations, algorithm accuracy and demographic differentials, and matching the system parameters to the concept of operations. Additional testing and tuning should be performed based on appropriate standards and guidelines, and in operation, a human must be kept in the loop and policies must be in place around data retention and use.
The recommendation that likely stands out most to privacy advocates as begging the question, if not rich in sad irony, is that privacy by design should be built into live facial recognition systems.