Moves to limit facial recognition reflect concerns, and maybe misconceptions, about bias
Multiple state governments are taking action to restrict the use of facial recognition, supported by advocacy groups and motivated in significant part by a desire to prevent disproportionate negative impacts on visible minorities and women due to algorithmic bias.
There is a common flaw in the reasoning of these organizations, however, Stewart A. Baker writes in a Lawfare post. Baker is a former general counsel for the U.S. National Security Agency and assistant secretary for policy at the Department of Homeland Security, and is currently a partner at Steptoe & Johnson LLP.
He traces arguments that facial recognition is ‘biased’ or ‘racist’ back to an IEEE study from 2012, and notes “massive gains in accuracy” observed by NIST between then and 2018. Good algorithms used properly showed error rates below 0.2 percent in those tests, with most errors caused by age or injuries, not race or gender.
Data from CBP field deployments using country of origin as a proxy for race (because the agency does not collect race data) shows a “negligible” effect on biometric match accuracy.
The potential harms remaining from facial recognition bias “are by and large both modest and easy to control,” Baker writes.
He notes by analogy that some pharmaceuticals work better for one gender than the other, and that protocols are used to minimize the related risks.
“For all the intense press and academic focus on the risk of bias in algorithmic face recognition, it turns out to be a tool that is very good and getting better, with errors attributable to race and gender that are small and getting smaller—and that can be rendered insignificant by the simple expedient of having people double check the machine’s results by using their own eyes and asking a few questions,” he concludes.
Alabama and Maryland move towards restrictions
A bill that would require law enforcement agencies to obtain a warrant to use facial recognition for real-time or near real-time deployments, ongoing surveillance, or persistent tracking has passed Alabama’s Senate with unanimous support, the Tenth Amendment Center reports.
Senate Bill 56, passed by a 30-0 vote, would impact federal facial recognition programs by denying them a source of data, according to the report. The Tenth Amendment Center urges Alabama’s lawmakers to pass the bill into law, also citing the possibility of biased results based on dubious ACLU testing.
In Maryland, House Bill 259 would apply restrictions to private-sector uses of biometrics along the lines of Illinois’ Biometric Information Privacy Act (BIPA), including setting up a private right of action, StateScoop reports.
The ‘Commercial Law – Consumer Protection – Biometric Identifiers Privacy’ bill would require informed written consent from data subjects for the use of their biometrics, and the publication of data retention and destruction plans by companies using the technology.
Lead sponsor Sara Love cites an alleged track record of misidentification of certain groups, like Black people, and refers to an incident in which a girl was ejected from a Detroit-area roller rink after being falsely matched to a banned individual. The Electronic Privacy Information Center (EPIC) is urging legislators to enact the law.
The bill must advance through at least two more sessions before reaching a vote.
New concerns and the same old allegations
The American Civil Liberties Union (ACLU) writes about three key problems with government facial recognition use: two arguments around the options available to the public and the business arrangements involved, and a third referring specifically to “biased biometrics.”
People on the wrong side of the digital divide, such as those without strong enough broadband for live video interviews, or using desktop computers without webcams, will find the use of ID.me increases their barriers to using the IRS online service, the ACLU writes. Further, outsourcing places a huge amount of public information in ID.me’s databases, without adequate protections for that data. The company is not subject to the same oversight measures as government agencies, the ACLU points out.
The group’s third argument is that “Face recognition is generally problematic; it is often inaccurate and has differential error rates by race and gender, which is unacceptable for a technology used for a public purpose.” This sentence links to NIST testing led by Patrick Grother, who told Biometric Update that generalizations are not helpful. That testing found that differentials in false positives by the best algorithms are “undetectable.”