The fallacy of hacked face biometrics’ vulnerability

Templates are safer than social media profiles

Biometric data is personal data, and it is sensitive personal information. Hackers can use it to open accounts with another person’s face and, combined with a breached ID document, to take over a victim’s identity.

You cannot change your face if an image of it is stolen.

These facts are sometimes presented together as an argument that face biometrics should not be used, or that biometric templates are among the most dangerous types of data that can be breached.

These arguments are invalid: their conclusions do not follow from the facts. Stolen face biometric templates pose no greater risk to the privacy and account security of their subjects than the photos on their social media profiles; in fact, quite the opposite.

What’s in a hack?

The form of face biometrics that is most useful to hackers is raw photographs, like the kind found on many social media profiles. In other words, public Facebook, Instagram, TikTok and LinkedIn accounts are a larger, more useful trove of face biometrics data for hackers than any database in the world.

Many media pundits and policymakers are confused on this point.

The familiar argument came up recently, when U.S. Congresswoman Jan Schakowsky noted “You can’t change that information” in criticizing what she sees as a lack of consumer protections in the proposed American Privacy Rights Act (APRA).

Data breaches are major contributors to fraud, primarily because they furnish cybercriminals with the non-biometric data they need to complete applications for fraudulent public benefits, bank accounts, or other services. And there are incidents when biometric data has been left unencrypted and exposed to cybercriminals.

But in most cases, properly stored biometric data has no value, for two different reasons.

The first is that a biometric match by itself typically confirms only part of an identity claim, and on its own is insufficient. Many identity security professionals explain the distinction by comparing the biometric match to a username and the liveness test to a password.
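
The username-and-password analogy above can be sketched as a toy decision rule. This is a minimal illustration, not any real product’s logic; the function name and threshold are assumptions for the example.

```python
# Hypothetical verification flow: the match score alone (the "username")
# is not enough; a passing liveness check (the "password") is also required.
# The name and threshold below are illustrative, not from any real system.

MATCH_THRESHOLD = 0.80  # assumed similarity cutoff


def verify_identity(match_score: float, liveness_passed: bool) -> bool:
    """Accept only when the face matches AND the capture is from a live person."""
    return liveness_passed and match_score >= MATCH_THRESHOLD


# A stolen photo might produce a high match score, but it fails liveness:
assert verify_identity(match_score=0.95, liveness_passed=False) is False
# A genuine live presentation passes both checks:
assert verify_identity(match_score=0.95, liveness_passed=True) is True
```

The point of the sketch is the conjunction: a breached image that scores well against the template still fails the flow without a live presentation.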

The second reason is that biometric systems architected and managed according to best practices only store biometric data in encrypted form, meaning that, unlike the aforementioned social media images, stolen templates cannot simply be resubmitted as a spoof of the subject’s identity.

What’s in the honeypot?

As David Birch points out in Forbes, templates “are much more secure because they do not store the biometric itself but an abstraction of it.” This does not eliminate the risk, he notes, but it dramatically reduces the ease, cost-effectiveness and scalability of attacks based on stolen templates.
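
The “abstraction” Birch describes can be illustrated with a toy model. Real systems derive templates with trained face models; in this sketch a template is just a short feature vector, which shows the key property: what is stored is a numerical abstraction compared by similarity, not a reconstructable photograph. All values here are made up for the example.

```python
import math

# Illustrative only: a real template comes from a trained face model.
# Here a template is a small feature vector, showing that storage and
# matching operate on an abstraction, not on a photo.


def cosine_similarity(a, b):
    """Score how alike two templates are (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm


# Two captures of the same face yield similar (not identical) templates:
enrolled = [0.12, 0.87, 0.45, 0.33]
fresh = [0.10, 0.90, 0.44, 0.35]
assert cosine_similarity(enrolled, fresh) > 0.99

# A different face yields a clearly lower score:
other = [0.90, 0.10, 0.20, 0.80]
assert cosine_similarity(enrolled, other) < 0.99
```

Nothing in the stored vector resembles an image an attacker could present to a camera, which is the practical difference between a breached template database and a scraped social media profile.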

For systems that require large numbers of biometric templates to be collected together in a giant honeypot, there are template protection methods on offer, with more in development. These include advanced technologies like homomorphic encryption and multi-party computation, which could provide protection against future attacks. In the meantime, standard template encryption, while theoretically breakable, has proven sufficient in practice to keep templates off the dark-web marketplaces where breached data proliferates.

The other reason listed above refers to a best practice that literally every organization using biometrics for security should follow: the implementation of biometric liveness and presentation attack detection.

Some policymakers and members of the media steadfastly avoid mentioning these technologies, even when discussing the problem they address. An article from TechRadar last year refers to a NordVPN report citing the immutability of fingerprints, and recommends two-factor authentication and strong passwords over biometrics for app security. Liveness and PAD are conspicuously absent.

Awareness of how biometrics work in practice does appear to be rising, however. The Register asked Gartner VP Analyst Akif Khan about the security of selfie authentication, and he noted that liveness checks render useless even the improperly stored facial images of Singaporeans that Resecurity recently discovered on the dark web. Even TechRadar seems to have caught on to the role of liveness, just not its implications.

Comments

3 Replies to “The fallacy of hacked face biometrics’ vulnerability”

  1. Actually you’re debunking a non-myth.
    The potential vulnerability of biometric templates is not that they might be “resubmitted as a spoof of the subject’s identity”. No attacker does that. The risk is that access to a template and the matching algorithm enables a Hill Climbing attack to generate a fake face that spoofs the target face.
    Having said that, Hill Climbing attacks are now obsolete thanks to generative AI. To spoof a biometric, the attacker can generate a fresh Deep Fake (even a synthetic moving image) of the target using any photo of the person found in public.

  2. Good point, Stephen. Thanks for explaining that attack method.

    My objection to the statements from politicians and media outlets like the examples above is that they are suggesting risk from a form of attack which, as you say, is imagined.

  3. The more information that’s provided to third-parties, the higher the risk of getting hacked. Couple this with using a data point that can’t be altered once it’s hacked and you’re just setting yourself up for a world of hurt.
    This is the same problem with password managers. Do you really want to rely on a third-party that’s a huge target for hackers to control your passwords? Many of these have already been hacked. You have to hack into my brain for the variable time dependent rules that I use to create passwords. You have to break into my house to steal the device that I use to identify myself to banking websites. Only do online banking from a directly connected Ethernet home network. Don’t do online banking using a wifi connection. Don’t use phone applications to do banking. Putting credit locks and alerts with credit agencies is the most important thing that you can do to protect yourself from fraud. All this other stuff just opens yourself to not just yourself being hacked but any third-party to whom you provide sensitive personal information.
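
The hill-climbing attack mentioned in the first comment can be sketched under a toy model. The crucial (and unrealistic, for any well-designed system) assumption is that the attacker can repeatedly query the matcher’s score for arbitrary candidate templates; the matcher, target vector and step size below are all invented for illustration.

```python
import random

random.seed(0)

# Toy model of a hill-climbing attack: the attacker has oracle access to
# the match score against a hidden enrolled template, and keeps any random
# perturbation that raises the score. No real system should expose this.

TARGET = [0.2, 0.7, 0.4]  # the hidden enrolled template


def match_score(candidate):
    # Toy matcher: higher (closer to 0) when candidate is nearer the target.
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))


candidate = [0.5, 0.5, 0.5]  # attacker's arbitrary starting point
score = match_score(candidate)
for _ in range(2000):
    trial = [c + random.uniform(-0.05, 0.05) for c in candidate]
    if match_score(trial) > score:  # keep only improving perturbations
        candidate, score = trial, match_score(trial)

# After enough queries the candidate climbs close to the enrolled template.
assert score > -0.01
```

As the second comment notes, generative AI has largely made this query-intensive approach obsolete, which is why rate limiting, score quantization and, above all, liveness detection remain the operative defenses.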
