Biometrics experts call for creation of FDA-style government body to regulate facial recognition
Could a U.S. government agency modeled on the Food and Drug Administration — one that categorizes biometric facial recognition technologies according to how risky they are and prescribes controls accordingly — meet the regulatory challenges associated with the technology?
A team of experts in the field, led by Erik Learned-Miller of the University of Massachusetts Amherst, has written a 57-page white paper titled “Facial Recognition Technologies in the Wild: A Call for a Federal Office,” which outlines the challenges associated with the technology, the key concepts behind the FDA model, and the current regulatory environment.
Learned-Miller identifies several potential problems that could arise from the technology, including privacy breaches, surveillance, unequal performance across different populations, and profiling. Given the high-stakes situations in which facial recognition is used, such as law enforcement and financial and employment decisions, he says, “harms from inaccuracies or misuse are a real and growing problem.”
“People have proposed a variety of possible solutions, but we argue that they are not enough. We are proposing a new federal office for regulating the technology,” Learned-Miller states. “We model it after some of the offices in the Food and Drug Administration for regulating medical devices and pharmaceuticals.”
An independent body could address the facial recognition ecosystem as a whole, following the precedent the FDA set for centrally regulating complex technologies with broad societal implications. A central claim of the researchers is that weighing the trade-offs between the risks and benefits of facial recognition requires standing up a new federal office.
Using the FDA example, the researchers outline how general controls can apply to all technologies in a group, as they do for FDA-approved medical devices. Special controls apply to just under half of all devices, as they are considered medium- or high-risk, while premarket notifications are required for the riskiest devices, along with general and special controls.
Recommendations include a similarly hierarchical structure for controls, requiring manufacturers to specify the intended use of each facial recognition application, and giving the federal body the combination of independence and expertise needed to evaluate applications for safety and effectiveness. Further recommendations address the riskiest deployments, adverse effects reporting, and the inclusion of demographic differences in risk assessments.
Becoming Human: Artificial Intelligence has published a primer to accompany the researchers’ white paper, which explains for a non-technical audience how the technology works and how it is used. The primer also provides definitions for industry terms.
Learned-Miller and a pair of colleagues received an award from the International Conference on Computer Vision last year for their work on the Labeled Faces in the Wild dataset.
His partners in composing the white paper include Joy Buolamwini of the MIT Media Lab, who also founded the Algorithmic Justice League, University of Virginia computer scientist Vicente Ordóñez, and Jamie Morgenstern of the University of Washington. A MacArthur Foundation grant supported the project.