Why scapegoating face recognition technology as a privacy wormhole doesn’t solve anything
Facial recognition technology is facing a blitz of negative media, with wormhole-like theories that the technology results in mass surveillance, destroys anonymity, and will forever change the way people behave in public. Advocates of this theory are calling for federal privacy regulation that would give a face a right of privacy it has never had in law to date. While using facial recognition in numerous commercial and internet applications definitely requires transparency and consent, the assumptions about what the technology can and cannot do simply do not hold up to fact.
Just recently, the New York Times stated: “The fundamental concern about faceprinting is the possibility that it would be used to covertly identify a live person by name.”
The arguments essentially label a face a private part with privacy rights. This sounds quite strange, until you enter the theoretical vacuum where the symbiotic relationship between the digital and real world blurs a traditional sense of privacy and anonymity that legitimately irks reasonable people. The “sea of faces” anonymity in American city life, for example, is feared to quickly dissolve when the crossover between real and virtual life enables anyone to immediately discover the identity of every passerby just by taking their picture.
Yet the real problem is not the faces; they have always been around. What is new is the virtual world’s big data, and how that big data is correlated to faces to create personally identifiable information. That is the crux of the problem: big data is unregulated, often anonymous, and operates within no legal, geographical or virtual boundaries.
Somewhere between voluntarily providing our photos and identity information to social media, giving up our locations to GPS to get where we need to go or to recover a mobile device, and providing stores our email addresses and phone numbers, the digital world has become as public as a city street, and just as voluntarily trod upon. My place of residence, my shopping habits, my marital status, and maybe even an “upskirt” pic unknowingly taken by some creep on the Boston trolley, are unprotected by privacy law in either world. Consider the March 2014 Massachusetts Supreme Judicial Court ruling that “upskirting” (the practice of secretly taking photos or video under an individual’s clothing) is not a privacy violation – which seems especially absurd if a face is deemed to have a right to privacy. Interestingly, the Court also decided that the right to privacy is further diminished in a public place.
Thus, it seems, somewhere in the journey to find a culprit for diminished anonymity, the culprit has become facial recognition vendors. Yet these companies are not big data, do not usually hold any identifying information, and the technology itself does not invade privacy. All these vendors do is build algorithms (mathematical procedures that measure facial features), convert faces into templates (a unique mathematical sequence describing an individual’s facial features), and determine whether one template matches another, usually in or between restricted database(s).
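To make the template idea concrete, here is a minimal sketch of that matching step. The extract_template() stub is hypothetical (vendors use proprietary feature extractors); only the comparison logic is representative of how a system scores whether two templates match:

```python
# Minimal sketch of face template matching, assuming a hypothetical
# feature extractor. A real extractor maps a face image to a
# fixed-length numeric template; two templates are then compared
# with a similarity score against a tuned threshold.

import numpy as np

def extract_template(face_image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a vendor's proprietary extractor:
    maps a face image to a fixed-length template vector."""
    seed = abs(hash(face_image.tobytes())) % (2**32)
    rng = np.random.default_rng(seed)
    return rng.standard_normal(128)  # 128-D template, a common size

def match_score(t1: np.ndarray, t2: np.ndarray) -> float:
    """Cosine similarity between two templates, in [-1, 1]."""
    return float(np.dot(t1, t2) / (np.linalg.norm(t1) * np.linalg.norm(t2)))

MATCH_THRESHOLD = 0.6  # illustrative; real systems tune this per use case

probe = extract_template(np.zeros((112, 112)))
enrolled = extract_template(np.ones((112, 112)))
print("same person?", match_score(probe, enrolled) >= MATCH_THRESHOLD)
```

Note that the template is a one-way numeric summary: it supports comparison against another template, but by itself it carries no name and no identity.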
The algorithm wand. The most dubious allegation is that the technology can wave an algorithm wand and identify any face anywhere caught on camera. Face biometric technology works well in controlled situations (good lighting, subject looking square into the camera), but is difficult to employ on video or photos that do not meet international standards for pictures, such as those required for passport or driver license photos. Police struggle with this every day, and it was one of the reasons it seemed to take so long to identify the Boston Marathon bombers, the Tsarnaev brothers, from video surveillance: variables such as pose, lighting, expression and resolution diminished the opportunities for a match.
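The practical consequence is that systems gate images on quality before even attempting a match. The sketch below illustrates the idea; the specific thresholds are illustrative assumptions, not taken from any vendor or standard (the passport-photo standards alluded to above define real requirements of this kind):

```python
# Sketch of a pre-match quality gate: face crops that are too small
# or too poorly lit (typical of surveillance video) are rejected
# before any template comparison is attempted. Thresholds below are
# assumed for illustration only.

import numpy as np

MIN_FACE_PIXELS = 90 * 90   # illustrative minimum face-crop size
MIN_BRIGHTNESS = 40         # illustrative mean-intensity floor (0-255)
MAX_BRIGHTNESS = 215        # illustrative mean-intensity ceiling

def usable_for_matching(face_crop: np.ndarray) -> bool:
    """Reject face crops too small or too poorly lit to match reliably."""
    h, w = face_crop.shape[:2]
    if h * w < MIN_FACE_PIXELS:
        return False  # too low-resolution, as with most CCTV stills
    mean_intensity = face_crop.mean()
    return MIN_BRIGHTNESS <= mean_intensity <= MAX_BRIGHTNESS

dark_frame = np.full((64, 64), 10, dtype=np.uint8)  # dim, low-res crop
print(usable_for_matching(dark_frame))  # False: fails size and lighting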
The same can be said of most still pictures on sites like LinkedIn, for example, where even typical portrait shots often cannot be matched “in the wild.” NameTag appears to identify only people who sign up for the service, and Facebook’s DeepFace research claims “near-human accuracy in identifying people’s faces,” according to the New York Times, but the technology has yet to be deployed. Moreover, today’s testing of Facebook matching works with great certainty only on the same group of people across batches of multiple photos. That is a far cry from claiming that DeepFace can pick a random person out of a crowd on a city street and search all of Facebook in seconds with the right answer, let alone the entire internet.
Templates not universal. A related allegation is that once people are assigned a unique template, they may be identified in existing or subsequent photographs, or as they walk in front of a video camera. One individual seeking to identify another individual for anything but legal reasons cannot do so. First, the images must be of sufficient quality, as stated previously; most are not. Second, the environment where the match takes place must be controlled: databases are not readily available to talk to each other in the wild. Third, and most importantly, the databases where the images reside must grant access to the face images. These technologies can be used in a controlled environment in an attempt to match a real-world pic to a Facebook or LinkedIn photo, for example. But that is only possible if, say, Facebook grants access to its photos. Today, that would occur only in a legal, government setting. In everyday life or commercial settings, it is not possible unless big data, apps or social media like Facebook allow it to happen.
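Put another way, identification is a search of a probe template against an enrolled gallery, so without access to that gallery there is simply nothing to search. The sketch below makes the point; the Gallery class is hypothetical, purely for illustration:

```python
# Sketch of why gallery access matters: a probe template can only be
# identified against templates someone has enrolled and made
# searchable. With no accessible gallery, the search returns nothing.

import numpy as np

class Gallery:
    """A closed, access-controlled set of enrolled templates."""
    def __init__(self):
        self._templates: dict[str, np.ndarray] = {}

    def enroll(self, name: str, template: np.ndarray) -> None:
        self._templates[name] = template / np.linalg.norm(template)

    def identify(self, probe: np.ndarray, threshold: float = 0.6):
        probe = probe / np.linalg.norm(probe)
        name, tmpl = max(self._templates.items(),
                         key=lambda kv: float(probe @ kv[1]),
                         default=(None, None))
        if tmpl is None or float(probe @ tmpl) < threshold:
            return None  # no accessible enrollment, no identification
        return name

empty_gallery = Gallery()  # no operator has granted access to its photos
probe = np.random.default_rng(0).standard_normal(128)
print(empty_gallery.identify(probe))  # None: a template alone names no one
```

The design point is that the matching math is the easy part; the identification only happens when a database owner, such as Facebook in the example above, chooses to open its photos to the search.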
On a privacy scale, most people should be much more worried about being upskirted than about someone taking a picture of their face. If the immense worldwide popularity of Facebook, Google, Flickr, Instagram, YouTube or Vimeo means anything, a good chunk of the world’s population has voluntarily placed their faces on the very public internet so others can view them. Those who do not want the attention avoid the internet, and rightly so. They may not always succeed, and that is another reason why responsible behavior on the part of big data, social media and mobile apps is so important. It is also important to protect people who have a reasonable expectation that their image will not be used for unwanted reasons.
This is exactly where consent and transparency need to put the brakes on photo harvesting. But that is not a facial recognition problem; that is a big data identity correlation problem. Scapegoating facial recognition vendors doesn’t solve the privacy wormhole folks are so concerned about. Addressing the whole issue of transparency and consent in the collection, storage, usage and security of personally identifiable information does. And that means getting social media and mobile applications like Facebook, Google, NameTag and others to take responsibility and act with basic ethical standards.
DISCLAIMER: BiometricUpdate.com blogs are submitted content. The views expressed in this blog are that of the author, and don’t necessarily reflect the views of BiometricUpdate.com.
Article Topics
blog | facial recognition | Janice Kephart | privacy
Privacy advocates are sometimes caricatured as extremist or shrill, but we fall a long way short of this hysteria: “wormhole-like theories that this [facial recognition] results in mass surveillance, destroys anonymity, and will forever change the way people behave in public”.
Kephart the lobbyist misrepresents the privacy argument. And she evidently misunderstands the central privacy issue when she says ‘faces have never been private’. Data privacy laws worldwide (the US is an infamous exception) protect Personal Information against excess collection, and undue use and disclosure. When a biometric or Big Data process attaches a name to an otherwise anonymous record (like a face), it turns that record into Personal Information, and data protection laws come into play. It is irrelevant that ‘faces are not private’; it is the naming of data that matters.
And here’s one of the really egregious privacy problems in face recognition: biometrics applications like NameTagApp and Facebook’s tag suggestions re-purpose the photographs that are uploaded to social media sites, turning them into templates and using them to track people.
The surreptitious identification of people by their faces, using templates created in other contexts and for other reasons, represents a serious data privacy problem. I’d like Kephart to engage with that issue, please.