Public discussions of biometrics use are short on nuance, experts say in IBIA podcast on facial recognition
How accurate is accurate enough for facial biometrics? The answer is that an appropriate use policy takes the accuracy and limitations of any biometric system into account, according to an industry expert interviewed in the latest ID NOW podcast on “Shifting Norms for Facial Recognition” from the International Biometrics + Identity Association (IBIA). In the case of law enforcement, that means establishing probable cause with evidence other than facial recognition in all cases.
The episode is hosted by Information Technology and Innovation Foundation (ITIF) Vice President Daniel Castro, and features Idemia National Security Solutions (NSS) Senior Director James Loudermilk and Senior Vice President of Corporate Development Christian Schnedler. Their starting premise is that law enforcement organizations require modern tools to work effectively in the modern world.
Loudermilk notes a study in Australia which showed a 6 percent false reject rate and 14 percent false acceptance rate for human examiners comparing people with passport photos. This degree of accuracy can be significantly surpassed by facial recognition algorithms, but not all of them.
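As a hypothetical illustration of how the two error rates cited above are defined (the data here is invented, not from the Australian study), the false reject rate is the fraction of genuine same-person comparisons that are wrongly rejected, while the false acceptance rate is the fraction of impostor comparisons that are wrongly accepted:

```python
# Illustrative sketch (not from the study): computing false reject rate
# (FRR) and false acceptance rate (FAR) from labeled comparison decisions.
# Each trial records whether the pair was genuine (same person) and
# whether the examiner or algorithm accepted the match.

def error_rates(trials):
    """trials: list of (is_genuine, accepted) boolean pairs."""
    genuine = [accepted for is_genuine, accepted in trials if is_genuine]
    impostor = [accepted for is_genuine, accepted in trials if not is_genuine]
    # FRR: genuine pairs wrongly rejected
    frr = sum(1 for accepted in genuine if not accepted) / len(genuine)
    # FAR: impostor pairs wrongly accepted
    far = sum(1 for accepted in impostor if accepted) / len(impostor)
    return frr, far

# Hypothetical data chosen to reproduce the rates quoted above:
# 50 genuine pairs with 3 rejections, 50 impostor pairs with 7 acceptances.
trials = ([(True, True)] * 47 + [(True, False)] * 3 +
          [(False, False)] * 43 + [(False, True)] * 7)
frr, far = error_rates(trials)
print(f"FRR = {frr:.0%}, FAR = {far:.0%}")  # FRR = 6%, FAR = 14%
```

The two rates trade off against each other: tightening a match threshold lowers the FAR but raises the FRR, which is one reason a single "accuracy" figure understates how algorithms differ.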
“Algorithms vary widely in performance,” Loudermilk states. “Some are literally less accurate than a coin toss, so proper selection does matter.”
Schnedler discusses how law enforcement agencies actually use facial recognition, and the history of police use of the technology. While the main concern for many people seems to be the application of facial biometrics to live surveillance camera feeds, Schnedler notes that in practice this is an edge case, and that the main application continues to be forensic searches carried out as part of a criminal investigation.
The largest change in use explained by Schnedler is the examination of online materials, such as a social media post proudly announcing a criminal act.
The impact of the pandemic on people’s willingness to pass identity documents to others is discussed, as is the accuracy of facial recognition systems for people wearing masks. Loudermilk points out that more companies producing algorithms have made claims about their accuracy than have provided data to back those claims.
Concerns about false identification are reasonable, Loudermilk says, though he explains that during his time with the FBI, the agency rooted out false positive matches with fingerprint biometrics to the point that no one had been wrongly accused based on the agency’s biometric matching in more than 30 years. The rejection of national ID systems in the U.S. is seen as proof that the country will resist shifts towards authoritarian governance through facial recognition of the kind seen in China. Even the minimal standards for driver’s licenses that have been applied to states have taken a full decade to be widely implemented.
Related issues around cell-phone tracking have already been addressed by the Supreme Court.
By contrast, the present circumstances create what Schnedler characterizes as a “schizophrenic” environment for law enforcement agencies, in which they are unsure of the parameters for generally accepted use of facial recognition, resulting in widely divergent practices. The legal landscape, the use of body cameras, and oversight of law enforcement technology use are also covered in the conversation. On Clearview AI, Schnedler notes the risk to police who use it of what is sometimes referred to as “fruit of the poisonous tree”: tying an investigation to data which is not admissible.