A good closeup can detect the most clever AI deepfakes … today


An MIT project has reportedly found that even sophisticated deepfakes can be detected.

In a finding that will be cheered by anyone concerned about a world in which nothing but face-to-face can be trusted, MIT researchers found that even the best deepfake systems leave behind telltale artifacts.

That trick was pulled off in the school's Computer Science and Artificial Intelligence Laboratory, where many biometrics advances, including in facial recognition, are being produced.

Deep networks can spot subtle clues left behind by image generators, and the researchers applied software and techniques that highlight those clues.

In a non-peer-reviewed paper published on arXiv, the team used a “patch-based classifier with limited receptive fields to visualize which regions of fake images are more easily detectable.” It was able to exaggerate the identified artifacts.

The process reportedly works even on images from generators that are adversarially tuned against a fake-image classifier.

Instead of training networks to give an overall fake/not-fake prediction, the MIT researchers found that even adversarially modified generators cannot model certain regions of a fake image accurately enough for the fake to escape detection.

“Local errors can be captured by a classifier focusing on textures in small patches,” according to the paper.
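The idea of scoring small patches independently, rather than the whole image, can be illustrated with a minimal sketch. The paper's actual detector is a trained convolutional classifier with a limited receptive field; the `score_fn` below is a hypothetical stand-in (a crude high-frequency texture statistic), used only to show how per-patch scores form a heatmap of suspect regions.

```python
import numpy as np

def patch_heatmap(image, patch=8, score_fn=None):
    """Score each non-overlapping patch of a grayscale image and
    return a per-patch heatmap of suspicious local texture.

    score_fn is a placeholder for a learned patch classifier; the
    default simply measures high-frequency energy in the patch.
    """
    if score_fn is None:
        score_fn = lambda p: float(
            np.var(np.diff(p, axis=0)) + np.var(np.diff(p, axis=1))
        )
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(heat.shape[0]):
        for j in range(heat.shape[1]):
            block = image[i * patch:(i + 1) * patch,
                          j * patch:(j + 1) * patch]
            heat[i, j] = score_fn(block)
    return heat

# Toy example: a smooth image with one noisy, artifact-like patch.
rng = np.random.default_rng(0)
img = np.zeros((32, 32))
img[8:16, 8:16] = rng.normal(0, 1, (8, 8))  # injected local "artifact"
heat = patch_heatmap(img, patch=8)
print(np.unravel_index(np.argmax(heat), heat.shape))  # → (1, 1)
```

Because each score depends only on an 8×8 patch, a generator that fools a whole-image classifier can still light up individual patches it failed to model well, which is the intuition behind the quoted claim.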

Interesting work, but the really notable development will be how fast this step is countered by deepfake makers.
