What everyone needs to know about facial recognition
The National Press Foundation recently held an event to help inform the news media about facial recognition. How fully it wants to inform people in the media, however, seems open to debate.
A pair of speakers from the Center for Democracy & Technology presented ‘What Journalists Need to Know about Facial Recognition.’
CDT President Alexandra Reeve Givens distinguishes between verification and identification, and she and Jake Laperruque focus on the latter in their talk.
Surveillance of protesters is highlighted as a major concern. Reeve Givens warns of the persistence of biometrics as a means of identification, and of the nature of facial recognition as a modality that is hard to avoid when deployed in real-time applications.
A distinction which is less clear in the talk is between live facial recognition in public spaces and forensic searches by law enforcement.
Several important concerns are raised, such as Laperruque's point that police do not always disclose forensic facial recognition matches, and the growth of online services that are ripe for abuse.
The phrase “wild west” is used in the summary of the video, and while Laperruque admits the environment has changed over time, no reference is made to industry or regulator efforts to establish best practices and guidelines for ethics and responsible use.
How wild is the West?
Biometric Update reached out to the International Biometrics + Identity Association to fill in several gaps in the picture of facial recognition use painted by the CDT speakers.
IBIA Managing Director Robert Tappan says that members of the group are highly engaged in discussions about the ethics of various deployments of facial recognition and other biometrics, and how to ensure potential harms are recognized and mitigated.
Unfortunately, these discussions do not come up in many articles purporting to provide an overview of how the technology is used. None of the articles above or below note the existence of good practice guidelines and other industry documents published to ensure appropriate implementation of the technology.
ABC News reports on an incident in which a ticket-holder was refused entry to an event at Radio City Music Hall based on a policy of excluding anyone even distantly connected to legal conflict with the facility’s parent company. Radio City Music Hall is owned by MSG Entertainment, whose principal stakeholder is the notoriously petty James Dolan.
“The allegations that have been recently reported regarding the use of face recognition to identify and ferret-out adversaries or ‘enemies’ and ban or exclude them from places like public entertainment venues is troubling,” says Tappan. “This is not what this technology was developed for. It was developed for the legitimate and lawful use to authenticate people’s identity and protect against fraud or misrepresentation, or to mitigate vandalism and damage.”
A TED Talk by Wall Street Journal Reporter Parmy Olson, hosted by NPR, purports to explain how businesses are using facial recognition.
The first use case she talks about is one of facial analysis, rather than identification, as she notes. How the technology is used in China is quickly invoked.
The British Security Industry Association says the number of surveillance cameras installed in the UK has grown to about 10 million, and Olson points out that they can be used with facial recognition. Olson warns of the potential for tracking by businesses with facial recognition much as has been normalized in the online environment.
Olson points out a website’s attractiveness scoring feature as another questionable application of facial recognition.
A deployment of Facewatch technology at Budgens is considered, and Olson points out the company’s use of ethnicity analytics, which is planned as a premium feature.
Olson mentions that inclusion in the watchlist is not based on an arrest or judicial process, but leaves out that it is typically offered to people caught shoplifting as an alternative to being referred to the police.
The team behind the shelved iBorderCtrl pilot at EU borders is looking for commercial customers, Olson warns, and she casts doubt on the accuracy of facial recognition technology by noting the use of a super-recognizer to complement an algorithmic system at a recent event, and the success of spoof attacks on phone unlocking systems. Olson does not mention the use of liveness detection in KYC applications, or the common industry best practice guidance of combining human and automated systems for some use cases.
“As with any technology, there is always a chance of error,” Tappan acknowledges. “No technology is 100 percent accurate all of the time. The bottom-line, though, is that there are far more instances of human error and bias than there is with current face recognition technology. IBIA and its members stand for the responsible use of all biometric and identity technologies — not only face recognition, but fingerprint, voice, iris, DNA and other modalities as well.”
Olson’s final warning seems to be about corporate use of facial analysis, but in a theme consistent among the articles mentioned here, no distinction is made between identification, analysis, and verification.
“One of the biggest errors that some writers make about face recognition is conflating identity authentication or verification with the term ‘surveillance,’” Tappan explains. “Verifying someone’s identity with face recognition when they go through a security checkpoint, or as they board a plane, seek entry into a building, or go through passport control or customs is not surveillance. That’s authentication.
“A number of so-called privacy advocates and anti-face recognition proponents have conflated these two different things, with a lot of hyperbole. One of the more vivid pictures they paint regarding face recognition is that it’s a technology that’s purportedly taking us down some very bad dystopian, ‘Minority Report’-type road. That’s unfortunate, not to mention wrong.”
A wrongful arrest and the Radio City Music Hall incident form the basis for an article by Axios decrying “alarming pitfalls.” The article includes a comment from a single observer — an advocate of banning facial recognition.
A Fox News editorial invokes Russia and China as countries where facial recognition is used in a highly partisan interpretation of the long-standing expansion of the technology at U.S. airports. The many democratic countries where the technology is used at airports and elsewhere are, as in the cases above, simply left out.
“Many reporters fail to mention that programs like TSA PreCheck, Clear, Global Entry and the like are all ‘opt-in’ — and so is the use of the expanded passenger face recognition pilot program just rolled out by TSA,” Tappan notes. “Passengers can opt out and request identity verification by a TSA officer instead of by a camera. Customers have a choice as to whether or not they want to participate.”
Similarly, members of the public and the media have a choice about whether or not to participate in public dialogue around emerging technologies like facial recognition. The choice to participate, however, creates a responsibility to gather information from different perspectives, not just those most complimentary to a predetermined final judgement.