
Research paper reveals deepfake technique that can deceive presentation attack detection tools

Study led by a UC San Diego computer engineering Ph.D. student

A paper presented at the WACV 2021 online conference describes a technique capable of deceiving presentation attack detection (PAD) tools designed to detect deepfakes, SciTechDaily reports.

According to the study led by Shehzeen Hussain, a UC San Diego computer engineering Ph.D. student, PAD can be defeated by inserting slightly manipulated inputs called adversarial examples into every video frame.

These inputs cause the detector's artificial intelligence system to misclassify the video, even when the adversary has no knowledge of the inner workings of the machine learning model used by the detector.
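The per-frame manipulation can be illustrated with a minimal, hypothetical sketch. Here a toy linear "detector" scores a frame, and an FGSM-style step (one common way to craft adversarial examples; the paper's actual method differs in its details) nudges each pixel against the gradient of the "fake" score:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a deepfake detector: a linear score over flattened pixels,
# where score > 0 means "fake". Real detectors are deep networks; this is
# only an illustrative sketch, not the authors' model.
w = rng.normal(size=64 * 64)

def detector_score(frame):
    return float(frame.ravel() @ w)

def fgsm_perturb(frame, eps=0.01):
    """One FGSM-style step: move each pixel a small amount against the
    gradient of the 'fake' score, so the frame looks more 'real'."""
    grad = w.reshape(frame.shape)       # gradient of the linear score
    adv = frame - eps * np.sign(grad)   # descend the fake-score
    return np.clip(adv, 0.0, 1.0)      # stay in valid pixel range

frame = rng.uniform(0.3, 0.7, size=(64, 64))
adv_frame = fgsm_perturb(frame)
print(detector_score(adv_frame) < detector_score(frame))  # True: score drops
```

In a video attack, a step like this would be applied to the face region of every frame; the perturbation is bounded by `eps`, which is why it is barely visible.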

In the scenario where attackers have complete access to the detector model, the reported attack success rate exceeded 99 percent for uncompressed videos and reached 84.96 percent for compressed videos.

Even in experiments where attackers could only query the machine learning model, success rates remained consistently high: 86.43 percent for uncompressed and 78.33 percent for compressed videos.
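In the query-only setting, a standard approach (used here purely for illustration; the authors' exact estimator may differ) is to approximate the detector's gradient from score queries alone, for example with random-direction finite differences:

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=16)

def score(x):
    # Black box: the attacker can query this, but cannot inspect w.
    return float(x @ w)

def estimate_grad(x, n_queries=200, sigma=1e-3):
    """NES-style gradient estimate built from score queries only:
    probe the model along random directions and average."""
    g = np.zeros_like(x)
    for _ in range(n_queries):
        u = rng.normal(size=x.shape)
        g += (score(x + sigma * u) - score(x - sigma * u)) / (2 * sigma) * u
    return g / n_queries

x = rng.uniform(size=16)
g_est = estimate_grad(x)

# The estimate should point roughly along the true gradient w.
cos = g_est @ w / (np.linalg.norm(g_est) * np.linalg.norm(w))
print(cos)
```

Once a gradient estimate is available, the attacker can take the same descent steps as in the white-box case, which is why query-only success rates stay close to the white-box numbers.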

Deepfake detectors focus on faces in videos, analyzing biometrics and other elements of the footage traditionally considered easiest to spot, such as unnatural blinking, and then try to remove or unmask the attack through compression and resizing operations.

The newly developed adversarial examples, created for every face in the video frame, are resilient to these compression and resizing operations, however, and can also be applied to detectors that operate on entire video frames rather than just face crops.

The attack algorithm bypasses these operations by automatically estimating, over a set of input transformations, how the model ranks images as real or fake. That estimate is then used to transform the adversarial images so the perturbation remains effective even after compression and decompression.
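This resembles the "expectation over transformations" idea: average the attack gradient over randomly sampled transformations so the perturbation survives whichever transform is later applied. A minimal numpy sketch under that assumption (a moving-average blur stands in for compression/resizing; the paper's transform set is richer):

```python
import numpy as np

rng = np.random.default_rng(2)
w = rng.normal(size=32)

def score(x):
    # Toy linear "fake" score; a stand-in for the real detector.
    return float(x @ w)

def random_transform(x):
    """Stand-in for compression/resizing: a moving-average blur with a
    randomly chosen window size."""
    k = int(rng.integers(1, 4))
    kernel = np.ones(k) / k
    return np.convolve(x, kernel, mode="same")

def eot_grad(n=50):
    """Average the score gradient over sampled transforms so the
    perturbation stays effective after a transform is applied.
    For a linear score, the gradient of score(T(x)) w.r.t. x is T^T w;
    since the blur is (approximately) symmetric, blurring w is a fair
    approximation."""
    g = np.zeros_like(w)
    for _ in range(n):
        g += random_transform(w)
    return g / n

x = rng.uniform(size=32)
adv = x - 0.05 * np.sign(eot_grad())

# Average score drop across freshly sampled transforms: negative means
# the perturbation survives the transforms on average.
drops = [score(random_transform(adv)) - score(random_transform(x))
         for _ in range(200)]
mean_drop = sum(drops) / len(drops)
print(mean_drop)
```

Averaging over the transform distribution is what makes the artifact robust: a perturbation tuned to a single fixed input would be largely erased by the first round of compression.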

Finally, the modified version of the face is inserted into every video frame to create a PAD-resilient deepfake video.

Hussain’s team did not release the source code behind the new technique to avoid it being used by malicious attackers.
