
The fallacy of hacked face biometrics’ vulnerability

Templates are safer than social media profiles

Biometric data is personal data, and sensitive personal data at that. Hackers can use it to open accounts with another person’s face and, combined with a breached ID document, under a victim’s full identity.

You cannot change your face if an image of it is stolen.

These facts are sometimes presented together as an argument that face biometrics should not be used, or that biometric templates are among the most dangerous types of data that can be breached.

These arguments are invalid: their conclusions do not follow from the facts. A stolen face biometrics template is not a greater risk to the privacy and account security of the subject than the photo on their social media profile; in fact, quite the opposite.

What’s in a hack?

The form of face biometrics that is most useful to hackers is raw photographs, like the kind found on many social media profiles. In other words, public Facebook, Instagram, TikTok and LinkedIn accounts are a larger, more useful trove of face biometrics data for hackers than any database in the world.

Many media pundits and policymakers are confused on this point.

The familiar argument came up recently when U.S. Congresswoman Jan Schakowsky noted “You can’t change that information” while criticizing what she sees as a lack of consumer protections in the proposed American Privacy Rights Act (APRA).

Data breaches are major contributors to fraud, primarily because they furnish cybercriminals with the non-biometric data they need to complete applications for fraudulent public benefits, bank accounts, or other services. And there have been incidents in which biometric data was left unencrypted and exposed to cybercriminals.

But in most cases, properly stored biometric data has little value to attackers, for two reasons.

The first is that a biometric match typically plays only part of the role in confirming an identity claim, and is insufficient on its own. Many identity security professionals explain the distinction by comparing the biometric to a username and the liveness test to a password.
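The username-and-password analogy can be made concrete with a minimal sketch. The thresholds, scores and function names below are hypothetical, not from any real product: the point is simply that a face match alone does not grant access without a passing liveness check.

```python
# Illustrative sketch only: a face match acts like a username claim,
# while liveness/PAD acts as the "password" gate. All values are toy numbers.

MATCH_THRESHOLD = 0.90     # similarity score required to call it a face match
LIVENESS_THRESHOLD = 0.95  # confidence that the capture is a live person

def authenticate(match_score: float, liveness_score: float) -> bool:
    """Both checks must pass; a matching face image alone is not enough."""
    is_match = match_score >= MATCH_THRESHOLD
    is_live = liveness_score >= LIVENESS_THRESHOLD
    return is_match and is_live

# A stolen photo may yield a high match score but fails the liveness gate:
print(authenticate(match_score=0.97, liveness_score=0.10))  # False
print(authenticate(match_score=0.97, liveness_score=0.99))  # True
```

This is why a replayed social media photo, however good the match, does not by itself get an attacker past a properly built system.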

The second reason is that biometric systems architected and managed according to best practices store biometric data only in encrypted form, meaning that, unlike the aforementioned social media images, it cannot simply be resubmitted as a spoof of the subject’s identity.

What’s in the honeypot?

As David Birch points out in Forbes, templates “are much more secure because they do not store the biometric itself but an abstraction of it.” This abstraction does not eliminate the risk, he notes, but it dramatically reduces the ease, cost-effectiveness and scalability of attacks based on stolen templates.
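What “an abstraction of it” means in practice can be sketched as follows. A template is typically a numeric feature vector derived from a face image by a trained model; the stored vector is compared to a fresh capture by a similarity measure, and the original photo is never kept. The 4-dimensional vectors here are toy data (real embeddings run to hundreds of dimensions), assumed purely for illustration.

```python
# Illustrative sketch: a biometric "template" is a feature vector, not an
# image. Toy 4-dim vectors stand in for real model-generated embeddings.
import math

def cosine_similarity(a, b):
    """Standard cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

enrolled_template = [0.12, -0.40, 0.88, 0.05]   # stored at enrollment
probe_template    = [0.10, -0.38, 0.90, 0.07]   # from a new capture

score = cosine_similarity(enrolled_template, probe_template)
print(score > 0.9)  # True: captures of the same face score close to 1.0
```

A thief who steals the stored vector gets numbers they cannot directly replay as a photograph, which is exactly the asymmetry Birch describes.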

For systems that require large numbers of biometric templates to be collected together in a giant honeypot, there are template protection methods on offer, with more in development. These include advanced technologies like homomorphic encryption and multi-party computation, which could provide protection against future attacks. In the meantime, standard template encryption, while theoretically breakable, has proven sufficient in practice to keep templates off the dark-web marketplaces where breached data proliferates.
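One family of template protection methods is “cancelable biometrics”: the system stores a keyed transform of the embedding rather than the embedding itself, so a breached template can be revoked by re-enrolling under a new key. The random-projection example below is a toy sketch of that idea under assumed key names, not a production-grade scheme.

```python
# Toy sketch of cancelable biometrics via a key-seeded random projection.
# A breach is "revoked" by re-enrolling with a fresh key, which changes
# the stored template even though the face stays the same.
import random

def protect(template, key, out_dim=3):
    """Project the embedding through a matrix derived from the key."""
    rng = random.Random(key)  # deterministic per key
    matrix = [[rng.gauss(0, 1) for _ in template] for _ in range(out_dim)]
    return [sum(m * t for m, t in zip(row, template)) for row in matrix]

embedding = [0.12, -0.40, 0.88, 0.05]
stored_v1 = protect(embedding, key="user42-key-1")
stored_v2 = protect(embedding, key="user42-key-2")  # after revocation
print(stored_v1 != stored_v2)  # True: same face, different stored templates
```

This revocability is what distinguishes a protected template from the immutable raw face image the “you can’t change your face” argument worries about.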

The first reason listed above points to a best practice that every organization using biometrics for security should follow: the implementation of biometric liveness detection and presentation attack detection (PAD).

Some policymakers and members of the media steadfastly avoid mentioning these technologies, even when discussing the problem they address. An article from TechRadar last year refers to a NordVPN report citing the immutability of fingerprints, and recommends two-factor authentication and strong passwords over biometrics for app security. Liveness and PAD are conspicuously absent.

Awareness of how biometrics work in practice does appear to be rising, however. The Register asked Gartner VP Analyst Akif Khan about the security of selfie authentication, and he noted that liveness checks render useless even the improperly stored facial images of Singaporeans that Resecurity recently discovered on the dark web. Even TechRadar seems to have caught on to the role of liveness, just not its implications.


Comments

2 Replies to “The fallacy of hacked face biometrics’ vulnerability”

  1. Actually you’re debunking a non-myth.
    The potential vulnerability of biometric templates is not that they might be “resubmitted as a spoof of the subject’s identity”. No attacker does that. The risk is that access to a template and the matching algorithm enables a Hill Climbing attack to generate a fake face that spoofs the target face.
    Having said that, Hill Climbing attacks are now obsolete thanks to generative AI. To spoof a biometric, the attacker can generate a fresh Deep Fake (even a synthetic moving image) of the target using any photo of the person found in public.

  2. Good point, Stephen. Thanks for explaining that attack method.

    My objection to the statements from politicians and media outlets like the examples above is that they are suggesting risk from a form of attack which, as you say, is imagined.
