Researchers develop neural network training method to generate effective fingerprint fakes
A team of American academic researchers has developed a neural network to generate artificial fingerprints which can produce false matches on rolled and capacitive biometric verification systems. The researchers were able to launch a successful “dictionary attack” on a rolled fingerprint system with 23 percent false matches against a matcher with the false match rate (FMR) set to 0.1 percent.
The researchers built on MasterPrints, images generated from common fingerprint features that three members of the team previously developed. In a paper (PDF) they present Latent Variable Evolution, a method of searching the latent space of a trained Generative Adversarial Network to generate fingerprint images, which they call DeepMasterPrints, built at the image level from common features. The method takes advantage of the partial prints typically captured by fingerprint sensors, which means it is not necessary to spoof the entire finger to produce a false match.
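The core idea of Latent Variable Evolution, evolving a generator's latent vector so its output falsely matches as many enrolled subjects as possible, can be sketched roughly as follows. Everything here is an illustrative stand-in rather than the authors' code: the `generator`, `matcher_score`, threshold, and the simple (1+1) evolution strategy are toy substitutes for the paper's trained GAN, fingerprint matcher, and evolutionary optimizer.

```python
import random

LATENT_DIM = 8
random.seed(0)

def generator(z):
    # Stand-in for a trained GAN generator: maps a latent vector to a
    # "fingerprint" feature vector (here, just a cheap deterministic transform).
    return [sum(z) / len(z) + zi * 0.5 for zi in z]

def matcher_score(print_a, print_b):
    # Stand-in similarity score between two prints (higher = more similar).
    return -sum((a - b) ** 2 for a, b in zip(print_a, print_b))

# Hypothetical enrolled "subjects" the dictionary attack tries to falsely match.
subjects = [[random.gauss(0, 1) for _ in range(LATENT_DIM)] for _ in range(50)]
THRESHOLD = -4.0  # match threshold; a tighter threshold models a lower FMR

def fitness(z):
    # Fitness = how many enrolled subjects the generated print falsely matches.
    fake = generator(z)
    return sum(matcher_score(fake, s) >= THRESHOLD for s in subjects)

# Simple (1+1) evolution strategy over the latent space: mutate the latent
# vector and keep the child whenever it matches at least as many subjects.
best = [random.gauss(0, 1) for _ in range(LATENT_DIM)]
best_fit = fitness(best)
for _ in range(200):
    child = [zi + random.gauss(0, 0.2) for zi in best]
    f = fitness(child)
    if f >= best_fit:
        best, best_fit = child, f

print(best_fit)  # number of toy subjects falsely matched by the evolved print
```

The key design point this sketch shows is that the attacker never needs gradients from, or internal access to, the matcher: the latent vector is optimized purely from match/no-match feedback, which is why the technique works as a black-box dictionary attack.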
The team was led by Philip Bontrager of New York University, with other researchers from NYU and Michigan State University.
The Bozorth3 matcher falsely matched 23.1 and 89.7 percent of rolled DeepMasterPrints at 0.1 percent and 1 percent FMR, respectively. With a capacitive dataset, an Innovatrics matcher falsely matched 3.6 and 25.3 percent of DeepMasterPrints at 0.1 and 1 percent FMR, respectively, and a VeriFinger matcher falsely matched 22.5 and 76.7 percent at the same respective FMRs. The similarity of the capacitive DeepMasterPrints' results against both the older Bozorth3 matcher and the more recent Innovatrics matcher leads the researchers to hypothesize that the capacitive prints may exploit universal patterns not specific to any particular verification system.
In an email, Synaptics Vice President of Marketing Godfrey Cheng told Biometric Update that the possibility of a “Master Fingerprint” type of attack has long been foreseen. The company considered this threat vector when it decided to invest in larger sensors, which Cheng argues are fundamentally safer against such an attack. Synaptics also uses neural networks and machine learning to combat spoofs, he writes, by rejecting any material a spoof could be made from as an imitation finger.
“Furthermore, should a new spoof material arise that can defeat our current matcher, we would simply train our Quantum Matcher to learn and distinguish such a new material from a real finger,” Cheng comments. “Then we would provide a secure update to provide immunity.”