Another town bans government use of facial recognition, cites dubious inaccuracy claim
Facial recognition technology has been banned from government surveillance use in Brookline, making it the second town in Massachusetts, after Somerville, to join other U.S. cities that have said no to the technology, writes WGBH.
During a town meeting last week, 179 voted for the ban, while only eight opposed it and 12 abstained.
According to Kade Crockford, director of the Technology for Liberty Program at the ACLU of Massachusetts, this is an important win for the protection of civil rights, one she hopes will lead to more bans across the state.
“We are losing control of our personal information because our technology has outpaced our civil rights protections in the law. We hope that the state legislature on Beacon Hill will take note of all of this energy in communities across Massachusetts,” Crockford said.
“Communities are saying we should be in control, we should be dictating how, if at all, these dangerous technologies will be used by our town and city governments. We hope that the legislature will listen and will take action to protect all of us throughout the Bay State.”
Facial recognition advocates say the technology could have a positive contribution to society, and some were present at the meeting to state their case, according to a video available on Twitter.
A 2018 MIT study, which claimed the technology exhibits gender and racial bias, is heavily cited by facial recognition opponents and was also mentioned by Crockford. She further cited a more recent 2019 report from the University of Essex, which examined six live trials of the technology by the London Metropolitan Police and found an 81 percent error rate in the results, or 4 in 5 false positives.
When asked about the results, Ken Marsh, Chair of the Metropolitan Police staff association, called the figure absurd.
According to Allevate, those who cite inaccuracy figures of 50 to 90 percent misinterpret the data: the false positive rate measures false positives against the total number of comparisons made, not against the total number of matches returned. The "error rate" as measured by the University of Essex implies the technology is expected to confirm identities with 100 percent certainty, rather than to flag potential matches for further investigation.
The real accuracy figures in the case examined by the University of Essex, according to Allevate, are a false accept rate below 0.5 percent and a false rejection rate below 0.1 percent, based on a watchlist of more than one person.
To illustrate, Allevate offers a marble analogy: a true 81 percent error rate would imply that, out of a bucket of 1,000 marbles containing 6 sought-after red ones, the system selects more than 800 for further investigation, doing nothing to make the search more efficient.
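The difference between the two readings of the statistic can be shown with a short calculation. This is a minimal sketch with hypothetical numbers: the alert counts below are not taken from the Essex report but are chosen so that the "false positives divided by alerts" reading reproduces an 81 percent figure, while the same data yields a per-comparison false accept rate well under 1 percent.

```python
# Hypothetical trial figures (assumed for illustration only):
total_comparisons = 10_000  # faces compared against the watchlist
alerts = 42                 # potential matches the system flagged
true_matches = 8            # alerts later confirmed as correct

false_positives = alerts - true_matches  # 34 incorrect alerts

# Reading 1: "error rate" as false positives per alert
# (the interpretation behind the 81 percent headline figure).
error_rate_per_alert = false_positives / alerts

# Reading 2: false accept rate as false positives per comparison
# (the interpretation Allevate argues is the correct one).
false_accept_rate = false_positives / total_comparisons

print(f"Error rate per alert:  {error_rate_per_alert:.0%}")   # ~81%
print(f"False accept rate:     {false_accept_rate:.2%}")      # 0.34%
```

The same underlying data produces an alarming-sounding 81 percent under one definition and a fraction of a percent under the other, which is the core of Allevate's objection to how the Essex figure has been reported.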