Researchers claim biometric deepfake detection method improves state-of-the-art

Biometrics can be used effectively to detect deepfakes, according to a paper from a team of Italian and German researchers reported by Unite.AI, and could be a less “unwieldy” method of doing so than detecting synthetic artefacts or other approaches.

The framework for the method specifies the use of at least ten genuine videos of the subject to train the biometric model, the researchers from the University of Naples Federico II and the Technical University of Munich write.

The research into ‘Audio-Visual Person-of-Interest DeepFake Detection’ has been posted to arXiv, and describes what the authors say is a new state of the art in deepfake detection. In testing against well-known datasets, the researchers improved area under curve (AUC) scores by 3 and 10 percent for accuracy in identifying genuine high- and low-quality videos, respectively, and by 7 percent for deepfake videos.

Interestingly, on high-quality videos, even the worst-performing system delivered deepfake detection accuracy above 69 percent.

The method was arrived at after the researchers discovered the segments of facial movement and audio most discriminative for each identity by using a contrastive learning paradigm, essentially picking out individuals’ distinctive mannerisms.
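The contrastive paradigm the article refers to can be illustrated with a toy InfoNCE-style loss: embeddings of segments from the same identity are pulled together, while segments from other identities are pushed apart. This is a generic sketch of contrastive learning, not the paper’s actual training objective; all names and values below are illustrative.

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """Toy InfoNCE-style contrastive loss on L2-normalised embeddings.

    The anchor and positive come from segments of the same identity;
    the negatives come from other identities.  Minimising this loss
    pulls same-identity segments together in embedding space.
    """
    def normalise(v):
        return v / np.linalg.norm(v)

    anchor = normalise(anchor)
    sims = [anchor @ normalise(positive)]                # positive-pair similarity
    sims += [anchor @ normalise(n) for n in negatives]   # negative-pair similarities
    logits = np.array(sims) / temperature
    # Cross-entropy with the positive pair as the target class
    return float(-logits[0] + np.log(np.exp(logits).sum()))

# Example: when the positive is a near-duplicate segment of the anchor
# and the negatives are unrelated, the loss is already close to zero.
rng = np.random.default_rng(0)
anchor = rng.normal(size=64)
positive = anchor + 0.01 * rng.normal(size=64)   # same identity, slight variation
negatives = [rng.normal(size=64) for _ in range(8)]
print(info_nce_loss(anchor, positive, negatives))
```

In a real system the embeddings would come from an audio-visual network trained end to end; here they are random vectors used only to show the loss behaviour.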

The ‘POI-Forensics’ system compares “high-level audio-visual biometric features” and semantic features to detect either single-modality (visual or audio) or multi-modal manipulation. Simulating these features, the researchers say, remains far beyond the capability of current deepfake-generation technologies.
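At a high level, this kind of comparison works like biometric verification: a test clip’s embedding is scored against reference embeddings from the subject’s genuine videos, and a low similarity flags a likely fake. The sketch below is a minimal illustration under that assumption; the helper names and threshold are hypothetical, not taken from the paper.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def poi_score(test_emb, reference_embs):
    """Best similarity between the test clip and any genuine reference."""
    return max(cosine(test_emb, r) for r in reference_embs)

def is_deepfake(test_emb, reference_embs, threshold=0.8):
    """Flag the clip as fake if it does not match the subject's references."""
    return poi_score(test_emb, reference_embs) < threshold

# Toy reference set: ten genuine-video embeddings clustered around one identity,
# mirroring the framework's requirement of at least ten genuine videos.
rng = np.random.default_rng(1)
identity = rng.normal(size=32)
references = [identity + 0.05 * rng.normal(size=32) for _ in range(10)]

genuine = identity + 0.05 * rng.normal(size=32)   # matches the identity cluster
impostor = rng.normal(size=32)                    # unrelated embedding
print(is_deepfake(genuine, references))
print(is_deepfake(impostor, references))
```

The threshold would in practice be calibrated on held-out genuine videos; here the random vectors simply demonstrate that a matching clip scores high and an unrelated one scores low.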

The method could be used to build a platform for people to prove that deepfake videos depicting them have been manipulated.

Unite.AI notes that several innovations in deepfake detection are published each week on arXiv alone.

Simulating the biometrics of a subject is not typically a high priority for the autoencoder systems or generative adversarial networks (GANs) that are used to create deepfakes, however.

The researchers’ choice of biometric data as the detection tool may not be the most effective approach to deepfake detection, however; BioID’s Ann-Kathrin Freiberg said in a recent EAB webinar that AI algorithms are generally effective at detecting the artefacts in images that give away digital manipulation.
