Some surprising questions, fewer insights after Deepfake Detection Challenge

Few if any unique or actionable insights have resulted from a lengthy contest created by Facebook to defang deepfake content, which threatens to erode societal trust in knowledge, information and legitimate authority.

It is not even clear how to interpret some of the more significant outcomes flowing from Facebook’s $1 million Deepfake Detection Challenge.

The best that can be said right now is that the winning detection software correctly distinguished real videos from deepfakes an average of just 65 percent of the time against a black-box data set. That data set was not available to entrants, so their algorithms encountered circumstances they had not been trained on.

A public data set was distributed to entrants, who used it to train models on the circumstances it contained. The best any algorithm managed against the public data was an average of 82.56 percent, judged by what Facebook calls “a common accuracy measure for computer vision tasks.”
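For illustration only, the sketch below shows how a plain accuracy figure of this kind can be computed for a detector that outputs, for each clip, a probability that it is fake. None of this is Facebook’s scoring code; the function names, the sample values and the 0.5 decision threshold are assumptions made for the example.

```python
# Hypothetical sketch: per-clip accuracy for a deepfake detector.
# Labels use 1 = fake, 0 = real; predictions are probabilities of "fake".

def accuracy(predicted_fake_probs, true_labels, threshold=0.5):
    """Fraction of clips classified correctly at the given threshold."""
    correct = sum(
        int((p >= threshold) == bool(label))
        for p, label in zip(predicted_fake_probs, true_labels)
    )
    return correct / len(true_labels)

# Toy example: the detector is right on three of four clips -> 0.75.
print(accuracy([0.9, 0.2, 0.3, 0.4], [1, 0, 0, 1]))
```

The black-box result quoted above is the same kind of average, just taken over clips the entrants never saw during training.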

The top-ranking model was written by Selim Seferbekov, a computer vision engineer at Foundry Group-backed Mapbox, who lives in Belarus.

It is notable that the third-best black-box score was posted by NtechLab, a Russian facial recognition firm that has courted controversy throughout its existence, first by creating a dating app that encouraged users to take pictures of anyone in sight to match against social media databases. Today, it is applying its AI skills to scan faces captured in real time by the tens of thousands of surveillance cameras in Moscow.

No. 6 was Konstantin Simonchik, co-founder of ID R&D, a venture-backed, New York-based firm that uses biometric authentication for fraud prevention. Eighth on the list was, in fact, ID R&D itself.

The experiment was launched in December by the social media icon along with Amazon Web Services, Microsoft Corp. and the Partnership on AI, a nonprofit coalition advocating for reins on algorithms. Last week, Facebook executives began releasing some results. More are expected this week at the Computer Vision and Pattern Recognition conference.

Ultimately, 35,109 models were submitted by 2,114 participants to analyze 115,000 challenge videos. Short performances by about 3,500 paid actors, comprising 38.5 days of footage, make up the original, unaltered experiment data.

A Fortune article discussing the contest’s results points out that it is not known why some algorithms that performed well on the public data set could not match that success on the private data set.

One guess is that “there were probably subtle differences between the videos Facebook created for the competition and genuine deepfakes that the (contestants’) algorithms couldn’t handle,” according to the article.

The article also notes that none of the winning algorithms used common digital forensic methods in analyzing the clips. Those methods include such basic techniques as looking for metadata and other indications that an image was, indeed, created by a camera.
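As a hedged illustration of what such a basic metadata check might look like, the snippet below uses the Pillow library to read EXIF tags from a still image and report whether camera make and model information is present. It is a sketch under assumptions (the file name is hypothetical, and challenge entries dealt with video rather than single JPEGs), not a forensic detector.

```python
# Hypothetical sketch: inspect EXIF metadata for hints that an image
# came from a camera. Absence of metadata proves nothing by itself,
# and deepfake video analysis requires far more than this.
from PIL import Image
from PIL.ExifTags import TAGS

def camera_metadata(path):
    """Return EXIF tags keyed by human-readable names (e.g. Make, Model)."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

tags = camera_metadata("frame.jpg")  # hypothetical file name
if "Make" in tags or "Model" in tags:
    print("Camera metadata present:", tags.get("Make"), tags.get("Model"))
else:
    print("No camera metadata found; that alone is not evidence of a fake.")
```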

Apparently, it is not known whether entrants dismissed those methods as not worth including or whether the entrants, who are among the best in machine learning, simply do not know about such basic tools.

There is a temptation to brush off concern about the challenge’s vague outcomes as the kind of result to be expected in the early days of a new software revolution.

But the quarry — highly realistic images and videos of synthetic people — is hardly older than the hunting tools. What is more, political forces in the U.S. and around the world seem to be working continuously to discredit all forms of information and knowledge.

Comments

One Reply to “Some surprising questions, fewer insights after Deepfake Detection Challenge”

  1. The best tech doesn’t need context to determine authenticity. It requires advanced liveness detection. Not sure if the contest organizers fully understand the problem and where it actually would be a problem.
