
Research paper reveals deepfake technique that can deceive presentation attack detection tools

Study led by a UC San Diego computer engineering Ph.D. student

A paper presented at the WACV 2021 online conference describes a technique capable of deceiving presentation attack detection (PAD) tools that try to detect deepfakes, SciTechDaily reports.

According to the study, led by UC San Diego computer engineering Ph.D. student Shehzeen Hussain, PAD tools can be defeated by inserting slightly manipulated inputs, called adversarial examples, into every video frame.

These inputs cause artificial intelligence systems to misclassify the video, even when the adversary is unaware of the inner workings of the machine learning model used by the detector.

The reported attack's success rate exceeded 99 percent for uncompressed videos and reached 84.96 percent for compressed videos in a scenario where the attackers had complete access to the detector model.

Even in experiments where attackers could only query the machine learning model, the attacks' success rates remained consistently high: 86.43 percent for uncompressed and 78.33 percent for compressed videos.

Deepfake detectors focus on the faces in a video, analyzing biometrics and other cues traditionally considered the easiest to spot, such as unnatural blinking, and then try to remove or unmask the attack through compression and resizing techniques.

The newly developed adversarial examples, created for every face in the video frame, are resilient to these compression and resizing operations, however, and can also be applied to detectors operating on entire video frames rather than just face crops.

The attack algorithm bypasses these operations by automatically estimating, over a set of input transformations, whether the model ranks images as real or fake. This estimate is then used to transform images so that the adversarial artifact remains effective even after compression and decompression.

Finally, the modified version of the face is inserted in every video frame to create a PAD-resilient deepfake video.
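The transformation-robust approach described above resembles the well-known "expectation over transformations" idea from the adversarial machine learning literature. Since the team withheld its source code, the sketch below is purely illustrative, not the paper's method: it attacks a hypothetical toy logistic "detector" with a gradient-sign update averaged over random quantization transforms standing in for compression. All names, parameters, and the detector itself are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in "detector": logistic score on a flattened 64-pixel frame.
# Scores near 1.0 mean "fake", near 0.0 mean "real". Purely hypothetical.
w = rng.normal(size=64)

def detect(x):
    return 1.0 / (1.0 + np.exp(-(w @ x)))

def quantize(x, step=0.05):
    # Crude stand-in for lossy compression: snap pixels to a coarse grid.
    return np.round(x / step) * step

def adversarial_frame(x, eps=0.1, steps=20, n_transforms=8):
    """Gradient-sign update averaged over random transforms (EOT-style).

    Pushes the detector's "fake" score down while keeping the total
    perturbation within an L-infinity budget of eps.
    """
    x_adv = x.copy()
    for _ in range(steps):
        grad = np.zeros_like(x)
        for _ in range(n_transforms):
            # Evaluate the detector on a randomly transformed copy.
            noisy = quantize(x_adv + rng.normal(scale=0.01, size=x.shape))
            s = detect(noisy)
            # Gradient of the logistic score w.r.t. the input: s*(1-s)*w.
            grad += s * (1.0 - s) * w
        # Step against the averaged gradient; clip to valid pixel range.
        x_adv = np.clip(x_adv - (eps / steps) * np.sign(grad), 0.0, 1.0)
    return x_adv

frame = rng.random(64)          # a frame the toy detector scores as-is
adv = adversarial_frame(frame)  # detect(adv) is driven below detect(frame)
```

In a real attack the detector would be a deep network and the transform set would model actual codec behavior; averaging gradients over those transforms is what keeps the perturbation effective after compression and decompression.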

Hussain’s team did not release the source code behind the new technique to avoid it being used by malicious attackers.
