Are blood-flow biometrics the answer to video deepfakes?

A team of researchers says it has used a biometric technique to uncover deepfakes and to identify the generators that created them.
Spotting deepfakes in this case means searching digitally captured faces for evidence of a pulse, a biometric signal that is hard to fake. The team says that FakeCatcher, as they call their AI system, achieved a deepfake-detection success rate of 97.29 percent on portrait videos.
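The general idea behind pulse-based detection is remote photoplethysmography (rPPG): subtle, periodic color changes in facial skin track the heartbeat, and a video without a consistent heartbeat-like signal is suspect. The sketch below illustrates that idea only; it is not FakeCatcher's pipeline, and the frame layout, face box, frequency band, and threshold-style ratio are illustrative assumptions.

```python
# Minimal rPPG sketch: look for a heartbeat-like spectral peak in a face video.
# Illustrative only -- not the researchers' actual method.
import numpy as np
from scipy.signal import butter, filtfilt, periodogram

def pulse_strength(frames, face_box, fps=30.0):
    """Score how strongly a pulse-like signal is present.

    frames:   array of shape (T, H, W, 3), RGB video frames (assumed layout)
    face_box: (top, bottom, left, right) pixel bounds of the face region
    fps:      frame rate of the video
    """
    top, bottom, left, right = face_box
    # Mean green-channel intensity of the face region per frame; blood volume
    # changes modulate this value slightly with each heartbeat.
    signal = frames[:, top:bottom, left:right, 1].mean(axis=(1, 2))
    signal = signal - signal.mean()

    # Band-pass to a plausible heart-rate band (~42-240 bpm).
    low, high = 0.7, 4.0
    b, a = butter(3, [low / (fps / 2), high / (fps / 2)], btype="band")
    filtered = filtfilt(b, a, signal)

    # A strong, narrow spectral peak in that band suggests a real pulse;
    # a flat spectrum suggests the "heartbeat" is missing or inconsistent.
    freqs, power = periodogram(filtered, fs=fps)
    band = (freqs >= low) & (freqs <= high)
    return power[band].max() / (power[band].mean() + 1e-12)

if __name__ == "__main__":
    # Synthetic example: a faint ~1.2 Hz pulse added to the green channel
    # produces a high score; pure noise would score much lower.
    fps, seconds = 30, 10
    t = np.arange(fps * seconds) / fps
    frames = np.full((len(t), 64, 64, 3), 128.0)
    frames[..., 1] += 0.5 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]
    print(pulse_strength(frames, (0, 64, 0, 64), fps))
```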
The researchers, two from Binghamton University and a third from Intel Corp., write in an openly published paper that they build on an existing technique that amplifies the visible effects of blood flow just beneath the skin.
A 2012 video unrelated to this new research demonstrates what blood flow amplification looks like on a live person’s face, and it is disconcerting, to say the least. A subject’s face rapidly flashes waves of waxy yellow to deep burgundy and back, not unlike a cuttlefish’s biological signals.
The researchers say that creators of deepfakes have not yet been able to mimic this effect convincingly.
Their source-detection approach “achieves the prediction for authenticity of the video by 97.29%, and the generative model by 93.39% on the FaceForensics++ data set.” They found that “the projection of generative noise into biological signal space can create unique signatures per model,” which aids in identifying a deepfake’s generator.
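One way to picture that source-attribution claim: if each generative model leaves its own residual pattern in the extracted pulse signals, then spectral features computed over regions of the face could be matched against per-generator reference signatures. The sketch below is a loose illustration of that intuition under assumed choices (a fixed grid of face cells, pulse-band spectra as features, and a nearest-centroid matcher); the paper's actual pipeline is more sophisticated and is what achieves the reported 93.39 percent attribution accuracy.

```python
# Illustrative sketch of matching a pulse-derived "signature" to a generator.
# Grid size, feature choice, and nearest-centroid matching are assumptions.
import numpy as np
from scipy.signal import periodogram

def ppg_signature(frames, fps=30.0, grid=4, band=(0.7, 4.0)):
    """Build a feature vector from pulse-band spectra of face grid cells."""
    T, H, W, _ = frames.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            cell = frames[:, i*H//grid:(i+1)*H//grid,
                             j*W//grid:(j+1)*W//grid, 1]
            sig = cell.mean(axis=(1, 2))
            freqs, power = periodogram(sig - sig.mean(), fs=fps)
            mask = (freqs >= band[0]) & (freqs <= band[1])
            feats.append(power[mask])
    vec = np.concatenate(feats)
    return vec / (np.linalg.norm(vec) + 1e-12)

def attribute_generator(signature, references):
    """Return the name of the closest known generator signature."""
    names = list(references)
    dists = [np.linalg.norm(signature - references[n]) for n in names]
    return names[int(np.argmin(dists))]

# Usage idea: references = {"ModelA": sig_a, "ModelB": sig_b, ...} built from
# labeled training clips, then attribute_generator(ppg_signature(clip), references).
```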
Blood flow has been researched as a method for detecting biometric spoof attacks, and was even reported to be implemented in the Samsung Galaxy S10’s fingerprint sensor, though the feature’s effectiveness was reportedly quite limited.