Deepfake challenge results show industry leaders are ready to protect public: ID R&D executives

Biometrics experts are not discouraged by the results of the Deepfake Detection Challenge (DFDC), as they indicate state-of-the-art technology can already detect the majority of deepfakes even in the most challenging conditions, ID R&D Co-founder and President Alexey Khitrov told Biometric Update in an interview.
While the results on the private dataset show that over 40 percent of attacks would go undetected even by the top-performing systems, the context is important: those systems are already useful, and Khitrov expects them to be deployed soon.
In addition to ID R&D's official DFDC entry, the company's biometrics experts and developers contributed entries of their own, with three entries from the company and its team finishing in the top ten. The "ID R&D" team finished 8th, while the company's Chief Science Officer and Co-founder Konstantin Simonchik finished 6th on the private leaderboard, and Lead Researcher Anton Pimenov finished 8th on the public leaderboard.
More than two thousand teams from around the world took part in the competition, which was sponsored by Facebook, Amazon Web Services (AWS), Microsoft, the Partnership on AI and leading academic institutions.
Teams submitted multiple entries for testing against the public dataset in developing their engines, and Simonchik described this dataset to Biometric Update as “typical videos recorded with typical smartphones.”
The private dataset, on the other hand, appears to have been augmented in ways unknown to participants to make manipulation more difficult to detect. Facebook's AI division published a scientific paper describing the preparation of the datasets, Simonchik notes, which mentions significant changes in frames per second and resolution down-sampling.
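The augmentations Simonchik describes can be pictured with a toy sketch. The snippet below is purely illustrative and assumes nothing about the actual DFDC pipeline: it reduces a clip's resolution by keeping every Nth pixel and lowers its effective frame rate by keeping every Nth frame, the two kinds of degradation the paper reportedly mentions.

```python
def downsample_frame(frame, factor=2):
    # Crude resolution down-sampling: keep every `factor`-th row and column.
    # (Real pipelines would filter before sub-sampling; this is a sketch.)
    return [row[::factor] for row in frame[::factor]]

def drop_frames(frames, keep_every=2):
    # Reduce effective frames per second by keeping every `keep_every`-th frame.
    return frames[::keep_every]

# A hypothetical 8-frame, 720x1280 grayscale clip of zeros.
frame = [[0] * 1280 for _ in range(720)]
video = [frame] * 8

low_res = [downsample_frame(f, 4) for f in video]   # 180x320 frames
low_fps = drop_frames(low_res, keep_every=2)        # 4 frames remain

print(len(low_fps), len(low_fps[0]), len(low_fps[0][0]))  # 4 180 320
```

Detectors tuned on "typical videos recorded with typical smartphones" can lose accuracy on clips degraded this way, which is consistent with the gap between public and private leaderboard scores.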
“The organizers seemed to put the technology to the test; to try to see the boundaries of what is possible right now,” Khitrov says.
“That’s why we think that the current state-of-the-art technology would perform much better than demonstrated by the DFDC challenge,” explains Simonchik.
There is room for improvement, Khitrov admits, but he is optimistic that the technology is already usable, and that the results show good actors have joined bad actors in a fraud prevention technology race.
“What they tell us is that in both kinds of conditions, in a more normal situation where you actually know what type of data you’re working with, or in very difficult conditions, deepfake detection proved to be a technology that produces results,” Khitrov assesses.
According to a company announcement, the machine learning expertise and insights gained from their work on ID R&D’s passive facial liveness detection helped in their efforts.
Although the use cases for presentation attack detection and deepfake detection, and therefore the level of accuracy that makes each effective, are very different, many of the same ideas were used in the challenge as in the development of ID R&D’s passive liveness technology.
“Multiple gold medals and top ten finishes among thousands of participants, I think clearly demonstrates the technological leadership that ID R&D brings to the space,” Khitrov says. “We can protect the public from the vast majority of the fakes that are thrown at the public on social media right now.”
Article Topics
AI | artificial intelligence | biometrics | deepfakes | fraud prevention | ID R&D | machine learning | passive facial liveness | spoof detection