‘Master faces’ make authentication ‘extremely vulnerable’ — researchers
Ever thought you were being gaslighted by industry claims that facial recognition on its own is trustworthy for authentication and identification? Maybe you have been.
Israeli researchers say in a new paper that they created a neural network that generates faces that work like master keys on a janitor’s ring.
The synthetic faces combine enough features common across a population that together they can fool verification algorithms protecting more than half of the identities in a test dataset.
The team included three scientists from Tel Aviv University’s Blavatnik School of Computer Science and the School of Electrical Engineering.
One particular approach, which they write was based on Dlib, created nine master faces that unlocked 42 percent to 64 percent of a test dataset. The team also evaluated its work using FaceNet and SphereFace, which, like Dlib, are convolutional neural network-based face descriptors.
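A face descriptor of this kind maps each face image to an embedding vector, and 1:1 verification accepts a pair of faces whose embeddings lie within a tuned distance threshold. A master face "covers" an identity when its embedding falls inside that threshold. A minimal sketch of the coverage idea, with random vectors standing in for real descriptor outputs (the dimensionality and threshold here are illustrative, not taken from the paper):

```python
import math
import random

def euclidean(a, b):
    # distance between two face embeddings
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def coverage(candidate, identity_embeddings, threshold):
    # fraction of enrolled identities whose embedding falls within the
    # verifier's distance threshold of the candidate face's embedding
    hits = sum(1 for e in identity_embeddings if euclidean(candidate, e) < threshold)
    return hits / len(identity_embeddings)

# Toy demo: random 128-d vectors stand in for real descriptor outputs.
random.seed(0)
identities = [[random.gauss(0, 1) for _ in range(128)] for _ in range(500)]

# A candidate near the densest region of embedding space covers many identities.
average_face = [sum(col) / len(identities) for col in zip(*identities)]
print(f"toy coverage: {coverage(average_face, identities, threshold=12.0):.1%}")
```

The attack succeeds exactly when one candidate's embedding lands within the threshold of many enrolled identities at once, which is why a handful of well-placed faces can unlock a large share of a dataset.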
Faces were generated with a pretrained StyleGAN generative adversarial network model.
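The article does not describe how candidate latent vectors are selected, so the following is a generic black-box search sketch, not the researchers' actual procedure: draw latent vectors, render and embed each candidate face, and keep the one that matches the most enrolled identities. `sample_latent` and `render_embedding` are hypothetical stand-ins for a StyleGAN generator plus a face descriptor.

```python
import math
import random

def coverage(candidate, identities, threshold):
    # fraction of identity embeddings within the verifier's distance threshold
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sum(dist(candidate, e) < threshold for e in identities) / len(identities)

def find_master_candidate(sample_latent, render_embedding, identities, threshold, trials=200):
    # black-box random search over the generator's latent space,
    # maximizing the number of enrolled identities the candidate unlocks
    best_z, best_cov = None, -1.0
    for _ in range(trials):
        z = sample_latent()
        cov = coverage(render_embedding(z), identities, threshold)
        if cov > best_cov:
            best_z, best_cov = z, cov
    return best_z, best_cov

# Toy demo: the "renderer" is the identity map, so latents are embeddings.
random.seed(1)
identities = [[random.gauss(0, 1) for _ in range(16)] for _ in range(200)]
best_z, best_cov = find_master_candidate(
    sample_latent=lambda: [random.gauss(0, 1) for _ in range(16)],
    render_embedding=lambda z: z,
    identities=identities,
    threshold=5.0,
)
print(f"best toy coverage after search: {best_cov:.1%}")
```

In a real pipeline the search is far more sophisticated, but the objective is the same: treat the verifier as a black box and optimize a latent vector for maximum coverage.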
They say a single face passed for 20 percent of identities in Labeled Faces in the Wild, a public benchmark dataset developed at the University of Massachusetts, Amherst. If master faces generalize that well, many current facial recognition products and authentication strategies could be rendered obsolete.
Pairing master faces with deepfake animation techniques could make the attack more effective by enabling them to pass some liveness tests, according to the paper.
It is unclear how this news might impact ongoing U.S. government facial recognition vendor 1:1 verification tests. But the Israeli researchers concluded that the use of face biometrics for authentication, at least, “is extremely vulnerable, even if there is no information on the target identity.”