Biometrics expert discusses facial recognition regulation as Massachusetts’ proposed ban moves forward
The Massachusetts State Senate has passed a bill that could block police in the state from using biometric facial recognition until the end of 2021, the Wall Street Journal reports.
The proposed legislation would create a special commission to study the technology and recommend regulations to control its application in law enforcement. The bill is now before the state House, and virtual hearings on it are expected to proceed soon, possibly this week. The House version of the bill may or may not include the moratorium, however, according to State Senator Cynthia Creem, who co-authored the Senate version.
When it was introduced, that version was panned as “the most far-reaching bill” proposed anywhere in the U.S. by International Biometrics + Identity Association (IBIA) Executive Director Tovah LaDier.
Several municipalities in Massachusetts have set restrictions on the use of facial recognition by local police, including Springfield earlier this year.
The legislative session ends July 31.
The question of ‘How to Regulate Face Recognition’ was examined by University of Massachusetts Professor Erik Learned-Miller in a webinar hosted by Laura Haas, dean of UMass Amherst’s College of Information and Computer Sciences.
Learned-Miller is the creator of the ‘Labeled Faces in the Wild’ dataset, and an award-winning facial recognition researcher.
The “Face Recognition & Regulation: Q & A with Professor Erik Learned-Miller” webinar was presented by UMass Amherst following the June release of the ‘Facial Recognition Technologies in the Wild: A Call for a Federal Office’ white paper, which proposes establishing an FDA-style federal office to regulate facial recognition and related AI technologies.
Learned-Miller began by explaining why, in his view, the regulation of facial biometrics is an issue that affects everyone. He provided a brief timeline of recent events in the field, including the impact of neural networks and the concerns increasingly expressed shortly thereafter. He says discussions around an “ultimate training set” or “ultimate benchmark” to solve accuracy and demographic disparity challenges led to the realization that “we’re never going to make this technology perfect.”
Instead, he argues, the goal should be to manage the errors the technology does make.
The U.S. Food and Drug Administration was put in place to deal with a marketplace that literally included “snake oil” sales. Learned-Miller uses the example of Thalidomide, a drug approved elsewhere but blocked by an astute FDA data scientist, who asked for more data, particularly about its safety for pregnant women. The move, which highlights the importance of context, prevented birth defects and other adverse outcomes among Americans. Notably, Thalidomide is currently used in the U.S., under specific conditions.
Learned-Miller also discussed the recent wrongful arrest of a man in Detroit, and while he notes that “probably it’s more reasonable in this case to attribute it to an error in how the technology was used,” he moves from there to the question of whether all facial recognition technology should be banned.
He enumerates several beneficial uses of the technology, from medical imaging to finding missing children, and sets a goal of regulating good uses while preventing bad ones.
During the question-and-answer period, members of the audience asked about other biometrics, while some questions suggested that facial recognition was being equated directly and exclusively with tracking.
Discussing challenges related to other biometrics in law enforcement, Learned-Miller notes that there were difficulties when DNA evidence was first introduced for court cases, and he asserts that confidence assessments for facial recognition are “often nearly meaningless.”
In response to a question from Biometric Update, Learned-Miller said that GDPR-style regulation could mitigate some potential harms from facial recognition, but would not address police use of the technology, for instance. MIT facial recognition researcher Joy Buolamwini, who co-authored the white paper with Learned-Miller, asked if he would support a moratorium, and he responded that such a move seems reasonable at the moment, though he agrees with those who would limit it to non-consensual use cases.
Something needs to be done, Learned-Miller says, because the more people he talks with about facial recognition, the more he hears about harms that have already arisen from its application.
Cities continue to act
The New Orleans City Council is considering an ordinance that would ban facial recognition and license plate readers in the city and set up a new approval and reporting process for surveillance technologies, The Lens writes.
The proposal is in response to Smart City plans that originally included the installation of 146 new public surveillance cameras. The number of planned cameras has since been reduced to 90.
Under the proposal, new surveillance technology would have to be approved by the City Council before being deployed, and existing technology would have to be approved within a year of the ordinance’s adoption. Approvals would last for three years, and entities using approved surveillance technologies would be obliged to report on their impact.
The city’s Office of Homeland Security and Emergency Preparedness already has a policy preventing the use of facial recognition on its cameras.