Protecting privacy, preventing bias in facial recognition essential, RAND study says

‘Identifying privacy and bias issues … as early as possible enables the mitigation of future risks’

How society can benefit from and use biometric facial recognition while still protecting privacy is one of the two central questions addressed in the new RAND Corp. study, Face Recognition Technologies: Designing Systems that Protect Privacy and Prevent Bias. The other asks what “methods can be used to mitigate the disparate impact of inaccuracies in the results from using face recognition.”

The study, according to the authors, “is intended to help improve” the Department of Homeland Security’s (DHS) “acquisition and oversight of facial recognition technologies (FRTs) by describing their opportunities and challenges,” but “might also be of interest to other government agencies considering how to implement FRTs and to FRT researchers and developers. Specifically, the work introduces FRT privacy and bias risks and alternatives to mitigate them.”

While this particular RAND report was not explicitly contracted by DHS, it was indirectly funded by the department. The report states up-front that the research for the study “was conducted using internal funding generated from operations” of RAND’s Homeland Security Research Division (HSRD) and HSRD’s Acquisition and Development Program.

HSRD, RAND’s homeland security research division, operates the Homeland Security Operational Analysis Center (HSOAC) under contract to DHS. HSOAC is one of DHS’s two Federally Funded Research and Development Centers (FFRDCs), for which DHS’s Science and Technology Directorate (S&T) serves as manager and executive agent. DHS’s other FFRDC is the Homeland Security Systems Engineering and Development Institute (HSSEDI), whose operation was awarded to the MITRE Corp. in 2009.

According to DHS, “FFRDCs act as a vehicle for special research and development contracting within the federal government. The FFRDCs provide DHS with independent and objective advice and quick response on critical issues throughout the Homeland Security Enterprise. HSSEDI and HSOAC perform high-quality research and provide advice that is authoritative, objective, and free from conflicts of interest caused by competition.”

“HSOAC conducts analyses and makes recommendations to strengthen DHS across its full set of missions to prevent terrorism and enhance security, secure and manage US borders, enforce and administer immigration laws, safeguard and secure cyberspace, and strengthen national preparedness and resiliency,” RAND says.

In their study, the four authors “describe privacy as a person’s ability to control information about them,” and note that “undesirable bias consists of the inaccurate representation of a group of people based on characteristics, such as demographic attributes.” Scouring existing literature, the authors proposed “a heuristic with two dimensions”: “consent status (with or without consent), and comparison type (one-to-one or some-to-many),” which they concluded “can help determine a proposed FRT’s level of privacy and accuracy.”

The objective of facial recognition technologies, they say, “is to efficiently detect and recognize people captured on camera,” and although “these technologies have many practical security-related purposes, advocacy groups and individuals have expressed apprehensions about their use.” They state their “research … was intended to highlight for policymakers the high-level privacy and bias implications of FRT systems.”

“More in-depth case studies [were used] to identify ‘red flags’ that could indicate privacy and bias concerns,” the authors said, such as “complex FRTs with unexpected or secondary use of personal or identifying information; use cases in which the subject does not consent to image capture; lack of accessible redress when errors occur in image matching; the use of poor training data that can perpetuate human bias; and human interpretation of results that can introduce bias and require additional storage of full-face images or video.”

The study’s four authors are Dr. Douglas Yeung, a social psychologist at RAND and a member of the Pardee RAND Graduate School faculty who specializes in communication styles, behaviors, and mental health when using technology; Dr. Rebecca Balebako, an information scientist at RAND from 2015 to 2018 and now a privacy engineer at Google reviewing and advising on privacy-sensitive features related to Google accounts; Dr. Carlos Ignacio Gutierrez Gaviria, a Governance of Artificial Intelligence fellow at the Sandra Day O’Connor College of Law at Arizona State University and a professor at Universidad Camilo Jose Cela who is researching “soft-law governance” of AI; and Michael Chaykowsky, a technical analyst at RAND involved in building analytics products, data collection, preprocessing, analysis, and the deployment of research models to interactive tools.

The study was grounded in a fact-finding endeavor “not intended to comprehensively introduce privacy, bias, or FRTs,” they said, noting that “future work in this area could include examinations of existing systems, reviews of their accuracy rates, and surveys of people’s expectations of privacy in government use of FRTs.”

Among the principal findings, RAND pointed out, are, first, that “every system requires a trade-off between accuracy and privacy,” and, second, that “no unified set of rules governs the use of face recognition technologies.”

Systems that obtain the subject’s consent are more accurate than those that do not, the report concludes. Systems that match one subject image against one stored image, such as device authentication and police arrest photographs, perform verification; systems that check one or more subject images against multiple stored images, such as social media identity verification and surveillance cameras, perform identification.

The most accurate systems carry the lowest privacy risk: those that obtain a person’s consent for one-to-one verification, such as passport authentication at a border or by airport security. Medium-accuracy systems with low privacy risk include visa screenings, while medium-accuracy systems with high privacy risk include detainee identification. The least accurate systems carry high privacy risk and include face-in-a-crowd airport surveillance. Meanwhile, multiple laws and regulations create a disjointed policy environment, limiting the extent to which privacy and bias concerns can be mitigated for these implementations.
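To make the report’s two-dimensional heuristic concrete, the sketch below encodes consent status and comparison type as a simple lookup that returns the rough accuracy and privacy profile the study associates with each combination. This is an illustration rather than code from the report; the placement of visa screening and detainee identification on particular consent/comparison combinations is inferred from the examples above, and all names (Consent, Comparison, assess) are hypothetical.

```python
# Illustrative sketch (not from the RAND report): the study's two-dimensional
# heuristic -- consent status and comparison type -- expressed as a lookup that
# returns a rough (accuracy, privacy risk) profile for a proposed FRT use case.
from enum import Enum


class Consent(Enum):
    WITH_CONSENT = "with consent"
    WITHOUT_CONSENT = "without consent"


class Comparison(Enum):
    ONE_TO_ONE = "one-to-one"      # verification, e.g. passport authentication
    SOME_TO_MANY = "some-to-many"  # identification, e.g. crowd surveillance


# Mapping drawn from the report's examples: consented one-to-one verification
# (passport checks) is most accurate with the lowest privacy risk, while
# non-consensual some-to-many identification (face-in-a-crowd surveillance)
# is least accurate with the highest privacy risk. The middle two rows are
# inferred from the visa screening and detainee identification examples.
PROFILE = {
    (Consent.WITH_CONSENT, Comparison.ONE_TO_ONE): ("high accuracy", "low privacy risk"),
    (Consent.WITH_CONSENT, Comparison.SOME_TO_MANY): ("medium accuracy", "low privacy risk"),
    (Consent.WITHOUT_CONSENT, Comparison.ONE_TO_ONE): ("medium accuracy", "high privacy risk"),
    (Consent.WITHOUT_CONSENT, Comparison.SOME_TO_MANY): ("low accuracy", "high privacy risk"),
}


def assess(consent: Consent, comparison: Comparison) -> tuple[str, str]:
    """Return the (accuracy, privacy risk) profile for a proposed FRT use case."""
    return PROFILE[(consent, comparison)]


if __name__ == "__main__":
    # e.g. airport passport authentication: consented, one-to-one
    print(assess(Consent.WITH_CONSENT, Comparison.ONE_TO_ONE))
```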

Yeung, Balebako, Gutierrez Gaviria, and Chaykowsky recommend that “for any technology that gathers personally identifiable information, such as facial characteristics, in public settings, [its users should] strive to protect those data, use anonymization or other means to reduce the amount of those data available, and establish rigorous user protocols to limit unauthorized access.”
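As a rough illustration of what those recommendations could look like in practice, the hypothetical sketch below keeps only a derived face template rather than the full-face image, pseudonymizes the subject identifier with a salted hash, and gates reads behind an explicit authorization check. None of this comes from the report; FaceRecord, AUTHORIZED_ROLES, and extract_template are assumed names invented for the example.

```python
# Illustrative sketch (not from the RAND report): data minimization and a
# rigorous user protocol for stored face data. All names are hypothetical.
import hashlib
from dataclasses import dataclass

AUTHORIZED_ROLES = {"screening_officer", "auditor"}  # hypothetical access policy


@dataclass
class FaceRecord:
    subject_id: str  # pseudonymous identifier, not a real-world name
    template: bytes  # derived feature vector, not the raw image


def pseudonymize(real_id: str, salt: bytes) -> str:
    """Replace a real-world identifier with a salted hash so stored records
    cannot be trivially linked back to a person."""
    return hashlib.sha256(salt + real_id.encode()).hexdigest()


def store_capture(real_id: str, image: bytes, salt: bytes, extract_template) -> FaceRecord:
    """Keep only the derived template; the raw full-face image is not retained."""
    return FaceRecord(subject_id=pseudonymize(real_id, salt),
                      template=extract_template(image))


def read_record(record: FaceRecord, requester_role: str) -> FaceRecord:
    """User protocol: only explicitly authorized roles may read stored records."""
    if requester_role not in AUTHORIZED_ROLES:
        raise PermissionError("role not authorized to access face records")
    return record
```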

Users of biometric facial recognition technologies also must “carefully consider the composition and size of either training or targeting data sets to discern the potential for skewing face recognition algorithms,” and “design blacklists that avoid bias and identify thresholds that produce acceptable rates of false-positive facial matches in security-related applications.”
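The threshold recommendation can be illustrated with a small calibration exercise: given similarity scores for impostor (non-matching) image pairs, choose the threshold whose false-positive rate stays at or below a target. The sketch below is hypothetical and not from the report; the score distribution and the 1 percent target are invented for illustration.

```python
# Illustrative sketch (not from the RAND report): calibrating a match-score
# threshold so the false-positive rate stays near an acceptable target.
import numpy as np


def threshold_for_fpr(impostor_scores: np.ndarray, target_fpr: float) -> float:
    """Return a threshold at which approximately target_fpr of impostor
    (non-matching) pairs would be accepted as matches."""
    # A pair is accepted as a match when its similarity score >= threshold,
    # so the (1 - target_fpr) quantile of impostor scores is that threshold.
    return float(np.quantile(impostor_scores, 1.0 - target_fpr))


def false_positive_rate(impostor_scores: np.ndarray, threshold: float) -> float:
    """Share of impostor pairs whose score meets or exceeds the threshold."""
    return float(np.mean(impostor_scores >= threshold))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical similarity scores for pairs of images of *different* people.
    impostor = rng.beta(2, 8, size=10_000)
    t = threshold_for_fpr(impostor, target_fpr=0.01)
    print(f"threshold={t:.3f}, resulting FPR={false_positive_rate(impostor, t):.4f}")
```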

While FRTs are primarily “designed to detect and recognize people when their images are captured by a camera lens” and “there are many practical security-related scenarios for implementing such technology,” Yeung, Balebako, Gutierrez Gaviria, and Chaykowsky reported, they also pointed out that “these systems are not failsafe. They raise privacy and bias concerns,” the latter of which has increasingly become a significant concern within the privacy rights space.

Indeed. As the RAND study’s authors remarked, “advocacy groups and the public at large have expressed concerns about privacy,” and “the systems’ results can be biased in ways that harm particular ethnic and racial groups.”

Therefore, Yeung, Balebako, Gutierrez Gaviria, and Chaykowsky concluded, “It is important that leaders and personnel using FRTs understand the problems associated with this technology so that they can improve policies on their acquisition, oversight, and operations.” And “identifying privacy and bias issues inherent in an FRT system as early as possible enables the mitigation of future risks,” they emphasized.
