Why scapegoating face recognition technology as privacy wormhole doesn’t solve anything


Facial recognition technology is facing a blitz of negative media coverage, with wormhole-like theories that the technology results in mass surveillance, destroys anonymity, and will forever change the way people behave in public. Advocates of these theories are calling for federal privacy regulation that would give a face a right of privacy it has never had in law to date. While using facial recognition in numerous commercial and internet applications definitely requires transparency and consent, the assumptions about what the technology can and cannot do simply do not hold up to the facts.

Just recently, the New York Times stated: “The fundamental concern about faceprinting is the possibility that it would be used to covertly identify a live person by name.”

The arguments essentially label a face a private part with privacy rights. This sounds quite strange until you enter the theoretical vacuum where the symbiotic relationship between the digital and real worlds blurs the traditional sense of privacy and anonymity, which legitimately irks reasonable people. The “sea of faces” anonymity of American city life, for example, is feared to dissolve quickly once the crossover between real and virtual life enables anyone to immediately discover the identity of every passerby just by taking their picture.

Yet the real problem is not the faces; they have always been around. What is new is the virtual world’s big data, and how that big data is correlated to faces to create personally identifiable information. That is the crux of the problem: big data is unregulated, often anonymous, and operates within no legal, geographic, or virtual boundaries.

Somewhere between voluntarily providing our photos and identity information to social media, giving up our locations to GPS to get where we need to go or to recover a mobile device, and providing stores with our email addresses and phone numbers, the digital world has become as public as a city street, and just as voluntarily trod upon. My place of residence, my shopping habits, my marital status, and maybe even an “upskirt” picture unknowingly taken by some creep on the Boston trolley are unprotected by privacy law in either world. Consider the March 2014 ruling by the Massachusetts Supreme Judicial Court, which found that “upskirting” (the practice of secretly taking photos or video under an individual’s clothing) is not a privacy violation, a finding that seems especially absurd if a face is deemed to have a right to privacy. Interestingly, the Court also decided that the right to privacy is further diminished in a public place.

Thus, it seems, somewhere in the journey to find a culprit for diminished anonymity, the culprit has become facial recognition vendors. Yet these companies are not big data, do not usually hold any identifying information, and the technology itself does not invade privacy. All these vendors do is build algorithms that measure facial features and convert those measurements into templates (a unique mathematical sequence describing an individual’s facial features), in order to determine whether one face template matches another, usually within or between restricted databases.
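To make the template idea concrete, here is a minimal sketch of what a 1:1 template comparison might look like, assuming templates are fixed-length numeric feature vectors. The vectors, the cosine-similarity measure, and the threshold below are illustrative stand-ins, not any vendor’s actual format:

```python
# A minimal sketch of 1:1 face template comparison. The templates here
# are made-up stand-ins; real vendor templates and thresholds are
# proprietary and the similarity measure varies by system.
import numpy as np

def match_score(template_a: np.ndarray, template_b: np.ndarray) -> float:
    """Cosine similarity between two templates: closer to 1.0 = more alike."""
    return float(np.dot(template_a, template_b) /
                 (np.linalg.norm(template_a) * np.linalg.norm(template_b)))

def is_same_face(template_a: np.ndarray, template_b: np.ndarray,
                 threshold: float = 0.8) -> bool:
    """Declare a match only above a tuned threshold; 0.8 is illustrative."""
    return match_score(template_a, template_b) >= threshold

# Illustrative vectors only -- no real biometric data.
enrolled = np.array([0.12, 0.84, 0.33, 0.51])
probe    = np.array([0.10, 0.80, 0.35, 0.49])
print(is_same_face(enrolled, probe))  # True: the vectors point the same way
```

Note that the comparison says only whether two templates resemble each other; by itself it attaches no name to either one.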

The algorithm wand. The most dubious allegation is that the technology can wave an algorithm wand and identify any face caught on any camera. Face biometric technology works well in controlled situations (good lighting, subject looking squarely into the camera), but is difficult to apply to video or photos that do not meet the international standards required for passport or driver’s license pictures. Police struggle with this every day, and it was one of the reasons it seemed to take so long to identify the Boston Marathon bombers, the Tsarnaev brothers, from surveillance video: variables such as pose, lighting, expression, and resolution diminished the chances of a match.

The same can be said of most still pictures on sites like LinkedIn, where even typical portrait shots often cannot be matched “in the wild.” NameTag, for example, appears to identify only people who sign up for the service, and while Facebook’s DeepFace research claims “near-human accuracy in identifying people’s faces,” according to the New York Times, the technology has yet to be deployed. Moreover, today’s testing of Facebook matching works with high certainty only on the same group of people across batches of multiple photos. That is a far cry from claiming that DeepFace can pick a random person out of a crowd on a city street and search all of Facebook in seconds with the right answer, let alone the entire internet.

Templates not universal. A related allegation is that once people are assigned a unique template, they may be identified in existing or subsequent photographs, or as they walk in front of a video camera. In practice, one individual cannot identify another this way for anything but legal reasons. First, the images must be of sufficient quality, as stated previously; most are not. Second, the environment where the match takes place must be controlled: databases are not readily able to talk to each other in the wild. Third, and most importantly, the databases where the images reside must grant access to the face images. These technologies can be used in a controlled environment in an attempt to match a real-world picture to a Facebook or LinkedIn photo, for example. But that is only possible if Facebook grants access to its photos. Today, that would occur only in a legal, governmental setting. In everyday life or commercial settings, it is not possible unless big data, apps, or social media like Facebook allow it to happen.
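The third point is structural, and a hypothetical 1:N search makes it plain: identification requires a gallery of enrolled templates, and no search can run at all unless the database owner hands that gallery over. In the sketch below, all names, vectors, and the threshold are made up for illustration:

```python
# A sketch of 1:N identification against a *restricted* gallery.
# Without access to the gallery (e.g., Facebook's photo store), no
# identification is possible, regardless of algorithm quality.
import numpy as np

def identify(probe: np.ndarray, gallery: dict[str, np.ndarray],
             threshold: float = 0.8) -> str | None:
    """Return the best-matching enrolled identity, or None below threshold."""
    best_name, best_score = None, threshold
    for name, template in gallery.items():
        score = float(np.dot(probe, template) /
                      (np.linalg.norm(probe) * np.linalg.norm(template)))
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# Hypothetical enrolled identities -- the part a vendor cannot conjure up.
gallery = {"alice": np.array([0.9, 0.1, 0.2]),
           "bob":   np.array([0.1, 0.9, 0.3])}
print(identify(np.array([0.88, 0.12, 0.21]), gallery))  # "alice"
```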

On a privacy scale, most people should be much more worried about being upskirted than about someone taking a picture of their face. If the immense worldwide popularity of Facebook, Google, Flickr, Instagram, YouTube, or Vimeo means anything, a good chunk of the world’s population has voluntarily placed their faces on the very public internet so others can view them. Those who do not want the attention avoid the internet, and rightly so. They may not always succeed, and that is another reason why responsible behavior on the part of big data, social media, and mobile apps is so important. It is also important to protect people who have a reasonable expectation that their image will not be used for unwanted purposes.

This is exactly the place where consent and transparency need to put the brakes on photo harvesting. But that is not a facial recognition problem; that is a big data identity correlation problem. Scapegoating facial recognition vendors doesn’t close the privacy wormhole folks are so concerned about. Addressing the whole issue of transparency and consent in the collection, storage, usage, and security of personally identifiable information does. And that means getting social media and mobile applications like Facebook, Google, NameTag, and others to take responsibility and act according to basic ethical standards.

DISCLAIMER: BiometricUpdate.com blogs are submitted content. The views expressed in this blog are those of the author, and don’t necessarily reflect the views of BiometricUpdate.com.
