Researchers develop “soft biometrics” from facial expressions to detect deepfakes

Researchers at the University of California, Berkeley and the University of Southern California have developed a way to detect deepfake videos by forensically modeling subtle characteristics of an individual's speaking style, then examining video to see whether those characteristics are present.

The research paper “Protecting World Leaders Against Deep Fakes” was authored by UC Berkeley computer science graduate student Shruti Agarwal with her thesis advisor Hany Farid and a team from USC and the USC Institute for Creative Technologies, and published by the Computer Vision Foundation. The technique, which determined whether videos were fake or real with between 92 and 96 percent accuracy, was presented at the Computer Vision and Pattern Recognition conference in Long Beach, California. It applies to “face swap” and “lip-sync” deepfake methods, which the USC computer scientists use to create videos for research purposes.

The researchers used the OpenFace2 facial behavior analysis toolkit to detect small facial tics such as raised brows, nose wrinkles, jaw movement, and pressed lips, and then created what the team calls “soft biometric” models for facial expressions with the data. Analyzing video of five major U.S. political figures, the researchers found that each has distinct mannerisms when speaking.
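OpenFace2 writes its per-frame analysis to CSV, with facial action-unit intensity columns conventionally named like `AU01_r`. A minimal sketch of loading those intensities into an array for modeling might look as follows (the column-naming convention is an assumption based on OpenFace's documented output format; verify against your own output files):

```python
import csv

import numpy as np


def load_au_intensities(csv_path):
    """Read per-frame action-unit intensities from an OpenFace-style CSV.

    Assumes AU intensity columns end in "_r" (OpenFace 2's convention);
    returns a (frames x AUs) float array plus the AU column names.
    """
    with open(csv_path, newline="") as f:
        reader = csv.DictReader(f)
        # OpenFace CSV headers often carry leading spaces; strip them.
        cols = [c.strip() for c in reader.fieldnames]
        au_cols = [c for c in cols if c.startswith("AU") and c.endswith("_r")]
        rows = []
        for raw in reader:
            row = {k.strip(): v for k, v in raw.items()}
            rows.append([float(row[c]) for c in au_cols])
    return np.array(rows), au_cols
```

The resulting per-frame series is the raw material for the per-speaker behavioral models described below.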

“We showed that the correlations between facial expressions and head movements can be used to distinguish a person from other people as well as deep-fake videos of them,” the report authors write. They also tested the technique's robustness to compression, video clip length, and speaking context. It proved more robust to compression than pixel-based detection techniques, but detection success is limited when a speaker appears in a different speech context – an informal setting, for instance, rather than a delivery of prepared remarks. A larger and more diverse set of training videos may mitigate this limitation.
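The correlation idea the authors describe can be sketched in a few lines of NumPy: compute pairwise Pearson correlations among the per-frame behavioral signals and flatten them into a “soft biometric” vector per clip. This is an illustrative simplification, not the paper's exact pipeline (which trains a classifier on such features); the function names and the plain distance comparison are assumptions for the sketch:

```python
import numpy as np


def correlation_features(signals: np.ndarray) -> np.ndarray:
    """signals: (frames x features) array of per-frame action-unit
    intensities and head-pose values for one clip. Returns the upper
    triangle of the Pearson correlation matrix as a flat vector."""
    corr = np.corrcoef(signals, rowvar=False)
    iu = np.triu_indices_from(corr, k=1)  # skip the diagonal of 1s
    return corr[iu]


def distance_to_reference(test_vec: np.ndarray, ref_vec: np.ndarray) -> float:
    """Illustrative check: a clip whose correlation pattern lies far
    from a speaker's reference vector would be flagged as suspect."""
    return float(np.linalg.norm(test_vec - ref_vec))
```

The intuition is that a deepfake can reproduce individual expressions frame by frame while still breaking the characteristic *co-occurrence* of expressions and head movements that this vector captures.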

Researchers have also suggested using digital watermarks to expose deepfakes, but watermarking is vulnerable to manipulations such as resizing and compression, Tech Xplore reports.
