Researchers show deepfakes can beat face biometric web services, propose defense strategy
Commonly used methods for generating deepfakes can produce images that regularly defeat face biometric algorithms, according to a new report by researchers at Sungkyunkwan University in Suwon, South Korea.
The three researchers’ paper, ‘Am I a Real or Fake Celebrity?’, pits deepfake impersonation attacks against commercial celebrity recognition web services from Microsoft, Amazon and Naver. Researchers Shahroz Tariq, Sowon Jeon and Simon S. Woo state that the attacks can easily be generalized to non-celebrities.
They attempted targeted attacks, which aim to trick the algorithm into misidentifying the submission as a particular celebrity, and non-targeted attacks, which aim to have the image mistakenly identified as any celebrity; the latter were consistently successful.
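As a rough illustration of the distinction (not the authors' own evaluation code), here is a minimal sketch of how a single submission might be scored, assuming a hypothetical list of (celebrity name, confidence) matches returned by a recognition API:

```python
# Minimal sketch of scoring one attack attempt as targeted or non-targeted.
# `matches` is a hypothetical list of (celebrity_name, confidence) pairs
# returned by a celebrity recognition API for one deepfake image.

def score_attempt(matches, target_celebrity, threshold=0.5):
    """Classify a submission as a targeted hit, a non-targeted hit, or a miss."""
    confident = [(name, conf) for name, conf in matches if conf >= threshold]
    if not confident:
        return "miss"                      # API recognized no celebrity
    if any(name == target_celebrity for name, _ in confident):
        return "targeted_success"          # misidentified as the impersonated celebrity
    return "non_targeted_success"          # misidentified as some other celebrity

# Example: a deepfake impersonating "Celebrity A"
print(score_attempt([("Celebrity A", 0.91)], "Celebrity A"))  # targeted_success
print(score_attempt([("Celebrity B", 0.77)], "Celebrity A"))  # non_targeted_success
```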
When making mistakes, the biometric algorithms returned high confidence scores, in some cases higher than for the genuine image, which the study authors attribute to the deepfakes retaining key identity data.
The researchers used three publicly available datasets and two custom datasets they created to generate a total of 8,119 deepfakes, then extracted faces from the video frames to submit to the web APIs.
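The general pipeline the paper describes can be sketched as follows; the Haar-cascade face detector and the Amazon Rekognition client used here are illustrative choices, not the authors' exact setup (Rekognition's `recognize_celebrities` call is one of the commercial APIs tested):

```python
# Sketch: pull frames from a deepfake video, crop the face region, and
# submit it to a celebrity recognition web API.
import cv2
import boto3

rekognition = boto3.client("rekognition")  # AWS credentials assumed configured
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def recognize_faces_in_video(video_path, frame_step=30):
    """Yield (name, confidence) celebrity matches for faces cropped from sampled frames."""
    cap = cv2.VideoCapture(video_path)
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % frame_step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.1, 5):
                crop = frame[y:y + h, x:x + w]
                _, jpeg = cv2.imencode(".jpg", crop)
                resp = rekognition.recognize_celebrities(Image={"Bytes": jpeg.tobytes()})
                for celeb in resp.get("CelebrityFaces", []):
                    yield celeb["Name"], celeb["MatchConfidence"]
        frame_idx += 1
    cap.release()
```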
They found that some methods of attack are more successful than others, and each biometric matching system responds differently to deepfakes.
With images taken from the VoxCelebTH dataset, Microsoft’s Azure Cognitive Services API identified 78 percent of deepfakes the researchers submitted to it as the targeted celebrity, while Amazon mismatched 68.7 percent of submitted images. Overall attack success rates across the five datasets used in the test were 28 percent for Amazon, 33.1 percent for Microsoft, and 4.7 percent for Naver, but fell to less than 4 percent, 5 percent, and 1 percent respectively when the researchers employed a proposed defense method. The researchers declared “no clear winner among the three APIs” in terms of resistance to deepfake impersonation.
The researchers’ proposed defense against the deepfake impersonation attacks applies off-the-shelf deepfake detectors in front of the biometric API. They plan to build a REST API to screen incoming requests to the celebrity facial recognition APIs.
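A rough sketch of that screening idea is shown below: a small REST endpoint runs a deepfake detector before forwarding the image to the recognition service. The `deepfake_score` function and the 0.5 threshold are placeholders, since the paper does not tie the defense to one particular detector.

```python
# Sketch of a screening wrapper: reject suspected deepfakes, otherwise
# forward the image to a downstream celebrity recognition API.
from flask import Flask, request, jsonify
import boto3

app = Flask(__name__)
rekognition = boto3.client("rekognition")

def deepfake_score(image_bytes):
    """Placeholder for an off-the-shelf deepfake detector returning P(fake)."""
    raise NotImplementedError("plug in a pretrained detector here")

@app.route("/recognize", methods=["POST"])
def recognize():
    image_bytes = request.get_data()
    # Screen the request first: block images the detector flags as deepfakes.
    if deepfake_score(image_bytes) >= 0.5:
        return jsonify({"error": "submission rejected as a suspected deepfake"}), 403
    # Otherwise pass the image through to the celebrity recognition service.
    resp = rekognition.recognize_celebrities(Image={"Bytes": image_bytes})
    matches = [
        {"name": c["Name"], "confidence": c["MatchConfidence"]}
        for c in resp.get("CelebrityFaces", [])
    ]
    return jsonify({"celebrities": matches})
```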
“The proposed defense method can provide excellent results. And, to some extent, it can be an effective defense mechanism,” the researchers write. “However, these off-the-shelf models may not be optimal against each DI attack, and false positives can play a vital role in increasing the attack’s success rate. In addition, due to the rise of new deepfakes, existing detection models are not guaranteed to work well against them. Therefore, a more generic and effective defense method against different types of existing and new DI attacks is urgently required. And more research is needed in that direction, exploring transfer learning, domain adaptation, and meta transfer learning to better cope with new DI attacks.”
A paper presented earlier this year showed a troubling new deepfake method capable of defeating deepfake detectors.