Research paper reveals deepfake technique that can deceive presentation attack detection tools

Study led by a UC San Diego computer engineering Ph.D. student

A paper presented at the WACV 2021 online conference describes a new technique capable of deceiving the presentation attack detection (PAD) tools used to detect deepfakes, SciTechDaily reports.

According to the study, led by Shehzeen Hussain, a UC San Diego computer engineering Ph.D. student, PAD can be defeated by inserting slightly manipulated inputs, called adversarial examples, into every video frame.

These examples cause the artificial intelligence system to misclassify the video, even when the adversary is unaware of the inner workings of the machine learning model used by the detector.

The reported attack's success rate exceeded 99 percent for uncompressed videos and reached 84.96 percent for compressed videos in a scenario where the attackers have complete access to the detector model.

Even in experiments where attackers could only query the machine learning model, success rates remained consistently high: 86.43 percent for uncompressed and 78.33 percent for compressed videos.

Deepfake detectors focus on the faces in a video, analyzing biometric cues and other elements of the footage traditionally considered easiest to spot, such as unnatural blinking, and then try to remove or unmask the attack through compression and resizing operations.

The newly developed adversarial examples, created for every face in the video frame, are resilient to compression and resizing operations, however, and can also be applied to detectors that operate on entire video frames rather than just face crops.

The attack algorithm bypasses these operations by estimating, over a set of input transformations, how the model ranks images as real or fake. That estimate is then used to transform the images so that the adversarial artifact remains effective even after compression and decompression.
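The transformation-averaging idea can be sketched with a toy example. Everything below is an illustrative assumption, not the paper's implementation: a linear "detector" score, a 2x2 average-pooling operation standing in for compression/resizing, and a fixed gradient step size.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a deepfake detector: a linear score over a flattened
# 8x8 "face crop". The weights, not learned here, are purely illustrative.
w = rng.normal(size=(8, 8))

def score(img):
    """Detector output: > 0 means the detector flags the crop as fake."""
    return float(np.sum(w * img))

def pool(img):
    """Crude stand-in for compression/resizing: 2x2 average pooling,
    upsampled back to the original size."""
    small = img.reshape(4, 2, 4, 2).mean(axis=(1, 3))
    return np.kron(small, np.ones((2, 2)))

# A "fake" crop the detector currently flags (it correlates with w).
frame = 0.1 * np.sign(w)

# Average the input gradient of the score across the identity and the
# pooled view. For this linear score the gradients are w and pool(w)
# (average pooling onto block-constant images is self-adjoint).
avg_grad = (w + pool(w)) / 2.0

# Descend along the averaged gradient until the crop passes the detector
# both as-is and after the pooling transformation.
adv = frame.copy()
for _ in range(100):
    if score(adv) < 0 and score(pool(adv)) < 0:
        break
    adv -= 0.05 * avg_grad

print(score(frame) > 0)                      # True: original crop is flagged
print(score(adv) < 0, score(pool(adv)) < 0)  # True True: evades, even pooled
```

Averaging the gradient over the transformations, rather than attacking only the raw frame, is what keeps the perturbation effective after the detector's preprocessing; a perturbation tuned to the identity view alone can be washed out by pooling.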

Finally, the modified version of the face is inserted into every video frame to create a PAD-resilient deepfake video.

Hussain’s team did not release the source code behind the new technique, to prevent it from being used by malicious attackers.
