Facial recognition systems can be vulnerable to deep morphing, researchers say

A study by Pavel Korshunov and Sebastien Marcel of the Idiap Research Institute in Martigny, Switzerland, found that some 95 percent of deepfakes are accepted by biometric facial recognition systems.

According to Korshunov and Marcel, current facial recognition systems are vulnerable to high-quality fake images and videos created using generative adversarial networks (GANs), creating a need for automated detection of GAN-generated faces. They used open-source GAN-based software to create deepfake videos with morphed faces, demonstrating that “state of the art face recognition systems based on VGG and Facenet neural networks are vulnerable to the deep morph videos, with 85.62 percent and 95.00 percent false acceptance rates, respectively, which means methods for detecting these videos are necessary.”
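The false acceptance rate quoted above is the fraction of spoofed comparisons that a recognition system wrongly accepts as genuine matches. A minimal sketch of how that figure is computed, assuming hypothetical similarity scores for deepfake-versus-target comparisons (not the authors' actual data or code):

```python
import numpy as np

def false_acceptance_rate(impostor_scores, threshold):
    """Fraction of impostor (e.g. deepfake) comparisons whose similarity
    score meets or exceeds the acceptance threshold."""
    impostor_scores = np.asarray(impostor_scores, dtype=float)
    return float(np.mean(impostor_scores >= threshold))

# Illustrative scores: how similar each deepfake looks to its target identity.
fake_scores = [0.91, 0.87, 0.95, 0.40, 0.88]
print(false_acceptance_rate(fake_scores, threshold=0.8))  # 0.8, i.e. 80% accepted
```

A rate of 95 percent, as reported for the Facenet-based system, means nearly every deep morph scored above the system's acceptance threshold.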

They found that visual quality metrics are the most effective at detecting deep morphs, with an 8.97 percent equal error rate. The study, “Vulnerability of Face Recognition to Deep Morphing,” can be reviewed here. The research was discussed at the Frontex International Conference on Biometrics for Borders 2019 in Warsaw.
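The equal error rate cited for the detection method is the operating point where the false acceptance rate (fakes passed as real) equals the false rejection rate (genuine videos flagged as fake). A small illustrative sketch of finding it by sweeping a score threshold, using made-up scores rather than the study's data:

```python
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    """Sweep thresholds over all observed scores; return the point where
    false acceptance (impostors >= t) and false rejection (genuines < t)
    rates are closest, averaged."""
    genuine = np.asarray(genuine_scores, dtype=float)
    impostor = np.asarray(impostor_scores, dtype=float)
    best_diff, eer = 1.0, 1.0
    for t in np.sort(np.concatenate([genuine, impostor])):
        far = np.mean(impostor >= t)   # fakes wrongly accepted
        frr = np.mean(genuine < t)     # genuine wrongly rejected
        if abs(far - frr) < best_diff:
            best_diff, eer = abs(far - frr), (far + frr) / 2
    return float(eer)

# Perfectly separable toy data gives an EER of 0.
print(equal_error_rate([0.9, 0.8, 0.7], [0.3, 0.2, 0.1]))  # 0.0
```

A lower equal error rate means a better detector; the 8.97 percent figure indicates visual quality metrics still miss or misflag roughly one case in eleven at that operating point.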

Google, in partnership with Jigsaw, recently produced and delivered a massive database of visual deepfakes that is now part of the FaceForensics benchmark created by the Technical University of Munich and the University Federico II of Naples. The database has hundreds of recorded videos which were manipulated with widely available deepfake generation methods to create thousands of deepfakes.

Other research from Amsterdam-based cybersecurity company Deeptrace warns that deepfakes are spreading extremely fast online, “with the number of deepfake videos almost doubling over the last seven months to 14,678.” This growth is driven by the increasing commodification of tools that make it easier for individuals to create deepfakes and disseminate them through social media. The company noted a high number of deepfakes and synthetic media tools originating from China and South Korea.
