Biometrics group comes out swinging after NIST face-scan report
If the International Biometrics + Identity Association had wanted to put skeptics and opponents of facial recognition technology on blast, it could hardly have struck a better tone than it does in a new report.
“This data [from a recent government report] serves to debunk the semantically-loaded misleading arguments on facial recognition performance that privacy activists have pushed in their zeal to ban a technology that enhances public safety and security,” begins the association’s report.
The trade group was jumping on findings published in December by the National Institute of Standards and Technology, which examined 189 mostly commercial algorithms from 99 developers for accuracy as part of the institute's ongoing testing.
Members of the association want to persuade the government, if not people in general, that facial recognition bans and moratoriums will take a valuable and evolving tool out of the hands of law enforcement and give a technological advantage to other nations undeterred by popular opinion.
One sentence in NIST’s report that particularly caught the biometrics community’s attention was this: “…some developers supplied identification algorithms for which false positive differentials are undetectable.” That means some software that was tested produced virtually identical error rates when comparing photographs of two or more people, regardless of race, age or gender.
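To make the quoted finding concrete: a "false positive differential" is the gap between how often a matcher wrongly declares two different people a match in one demographic group versus another. The sketch below is purely illustrative (the group labels and outcomes are invented, and NIST's actual methodology is far more involved), but it shows what an "undetectable" differential means in practice.

```python
# Hypothetical sketch of a false positive differential. Each trial records a
# demographic label and whether a NON-matching pair of photos was wrongly
# declared a match (a false positive). All data here is made up.
trials = [
    ("group_a", False), ("group_a", True), ("group_a", False), ("group_a", False),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

def false_positive_rate(trials, group):
    """Fraction of a group's non-match trials wrongly flagged as matches."""
    outcomes = [hit for g, hit in trials if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = false_positive_rate(trials, "group_a")
rate_b = false_positive_rate(trials, "group_b")

# An "undetectable" differential means the per-group rates are essentially equal.
differential = abs(rate_a - rate_b)
print(differential)  # 0.0 here: both groups err at the same 25% rate
```

The algorithms NIST singled out are the ones where this gap is too small to measure, which is the association's central exhibit.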
The biometrics association says the media and skeptics are paying too much attention to algorithms that performed poorly. It notes that NIST’s January FRVT Verification Report lists five algorithms that, “under suitable conditions with good photos, lighting etc., have an accuracy rate of 99.9% or better.”
Digging deeper, the association claims that NIST’s 30 top-ranked identification (1:N) algorithms — the kind that search a database for potential matches — provide “far greater accuracy than humans could ever achieve.” The same is true of the highest-performing verification software, which compares two images for a match, it said.
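The identification-versus-verification distinction can be sketched in a few lines. This is a simplified illustration, not any vendor's or NIST's actual method: real systems reduce each photo to a numeric template via a trained model, whereas the templates, names and threshold below are invented for the example.

```python
# Illustrative sketch: face matchers compare numeric "templates" by a
# similarity score against a threshold. Templates here are made-up vectors.
import math

def similarity(a, b):
    """Cosine similarity between two templates."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def verify(probe, reference, threshold=0.9):
    """1:1 verification: do two images show the same person?"""
    return similarity(probe, reference) >= threshold

def identify(probe, database, threshold=0.9):
    """1:N identification: search a whole database for candidate matches."""
    return [name for name, template in database.items()
            if similarity(probe, template) >= threshold]

database = {"alice": [0.9, 0.1, 0.2], "bob": [0.1, 0.8, 0.5]}
probe = [0.88, 0.12, 0.21]  # a new photo's template, close to alice's

print(verify(probe, database["alice"]))  # True: templates align closely
print(identify(probe, database))         # ['alice']
```

The 1:N case is what law enforcement lead-generation uses, and it is also where demographic error differentials matter most, since a search touches every identity in the database.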
In fact, the best algorithms outperform the “mean performance of all human groups including skilled forensic examiners with unlimited time and the best automated tools,” the report’s authors state.
The biometrics group further bristles at accusations that error-rate discrepancies are bias, writing, “(m)achines do not have emotions, and do what they are programmed to do.” Left unsaid is that human programmers do have points of view and blind spots.
The association concedes in the report that the poorest-performing algorithms display “significant performance difference among demographic groups,” but argues that their performance is so poor that neither government nor industry would use them in the first place.
From there, the report goes on to describe how public safety roles would benefit from facial recognition systems. Software could compare millions of images to find missing children who do not know their names; identify exploited children; spot possible passport or driver-license fraud; and seek leads when a surveillance photo is the only evidence in a crime.
There is too much at stake, the association concludes, to hamper the growth of face-scanning systems, given the progress the technology has demonstrated in the NIST report.