US OSAC argues live facial recognition can protect people without violating privacy

A U.S. government organization has quietly published guidance on the use of live facial recognition, suggesting that the technology can effectively protect people without violating their rights or privacy laws.

The Technical Guidance Document from OSAC (the Organization of Scientific Area Committees for Forensic Science) was published in January 2024 and is hosted by the National Institute of Standards and Technology. “Framework for Implementing Passive Live Facial Recognition” is a 29-page report written by the OSAC Facial & Iris Identification Subcommittee.

Bolded within the executive summary is a warning that “Central to the ethical implementation of a live facial recognition capability is the consideration of proportionality, human rights and the right to privacy.”

“This document describes ‘privacy-by-design’ features that should be implemented in support of maintaining people’s anonymity,” the abstract continues. This is possible to do today, the paper argues, citing reports from The National Security Commission on Artificial Intelligence, The Biometrics Institute, the UK’s Biometrics and Surveillance Camera Commissioner’s Office and a framework written for UK police.

The guidance includes advice on key performance metrics for system accuracy and recommendations for successful implementation of live facial recognition.

The potential scope of application for live facial recognition, according to the paper, includes identifying wanted individuals, alerting operators when a person “who may cause harm” enters a given area, such as a registered sex offender entering school grounds, and identifying people who may harm themselves or others, such as stalkers, terrorists, or missing persons.

Real-time facial recognition systems should refer to a watchlist, the document says, with all images and templates of non-matched individuals deleted. Context images including the face of a matched individual should have all other faces redacted.

The paper also identifies three myths relating to concerns about facial recognition. The myths are that facial recognition is illegal, that “facial recognition is inaccurate and biased,” and perhaps most controversial among the claims, that “LFR is intrusive and impacts on citizen privacy.”

Supporting the last claim, OSAC says “privacy is considered at every stage” if the system is properly implemented. Only those on the watchlist are identified, and it is not possible to track people in their daily activities. The footprint of the system should be limited and specific. Only a fraction of those processed by the system will trigger an alert, and only a fraction of alerts will trigger an action, with a human reviewer performing identity confirmation.

The paper’s design guidelines address cameras and their positioning, network architecture, and the configuration of facial recognition software.

Advice is provided on decision threshold scores, watchlist composition, and how to understand the accuracy of systems in which both “false alerts” and “missed alerts” contribute to overall performance.
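The trade-off the paper describes can be sketched numerically: raising the decision threshold suppresses false alerts but increases missed alerts, and vice versa. The scores and thresholds below are illustrative assumptions for the sketch, not figures from the OSAC document.

```python
# Sketch of the false-alert / missed-alert trade-off at a decision
# threshold. All similarity scores here are made-up example values.

def alert_rates(watchlist_scores, non_watchlist_scores, threshold):
    """Count missed alerts (watchlist faces scoring below the threshold)
    and false alerts (non-watchlist faces scoring at or above it)."""
    missed = sum(1 for s in watchlist_scores if s < threshold)
    false_alerts = sum(1 for s in non_watchlist_scores if s >= threshold)
    return missed, false_alerts

# Hypothetical similarity scores on a 0-1 scale
watchlist = [0.91, 0.85, 0.78, 0.60]     # true watchlist members
others = [0.20, 0.35, 0.55, 0.72, 0.40]  # non-watchlist passers-by

# A higher threshold trades false alerts for missed alerts
for t in (0.5, 0.7, 0.9):
    missed, false_alerts = alert_rates(watchlist, others, t)
    print(f"threshold {t}: {missed} missed, {false_alerts} false")
```

Tuning this operating point, together with watchlist size and image quality, is what the guidance frames as matching system parameters to the concept of operations.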

The guidance concludes with eight recommendations. These include paying “due regard” to legal and ethical obligations, algorithm accuracy and demographic differentials, and matching system parameters to the concept of operations. Additional testing and tuning should be performed based on appropriate standards and guidelines, and in operation a human must be kept in the loop, with policies in place governing data retention and use.

The recommendation likely to stand out most to privacy advocates as begging the question, if not rich in sad irony, is that privacy by design should be built into live facial recognition systems.
