A proposal for responsible deepfakes

A small academic and corporate team of researchers say they have created a way to preserve the biometric privacy of people whose faces are posted on social media.

And while that innovation is worthy of examination, so are a couple of phrases the team has coined for its facial anonymization: "a responsible use for deepfakes by design" and "My Face, My Choice."

For most people, deepfakes exist because humans like to be fooled. For the rest, they exist to dominate a future in which objective proof and truth no longer exist.

Two scientists from State University of New York, Binghamton, and another from Intel Labs say in a peer-reviewed paper that they recognize the identity and privacy dangers posed by face image scrapers like Clearview AI that harvest billions of faces for their own purposes and without permission.

The answer, they say, is qualitatively dissimilar deepfakes. That is, using deepfake algorithms to alter faces just enough that they cannot be matched by facial recognition software. The result is a facial image in a group photo that is true enough to the original (and free of AI weirdness) that anyone familiar with the person would quickly accept it as representative.

The researchers also have proposed metrics for doing this under which a deepfake (though, again, still recognizable by many humans) is randomly generated with a guaranteed dissimilarity.
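The paper itself does not publish implementation code in this article, but the "guaranteed dissimilarity" idea can be sketched as a rejection-sampling loop: generate deepfake variants until one is far enough from the original in a face-recognition embedding space. The `embed` and `generate_variant` functions below are stand-ins, and the threshold value is an assumption, not a figure from the paper.

```python
import random

THRESHOLD = 0.6  # assumed recognizer match cutoff, not from the paper


def embed(face):
    # Stand-in for a real face-recognition embedding model.
    random.seed(face)
    return [random.random() for _ in range(4)]


def distance(a, b):
    # Euclidean distance between two embedding vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5


def generate_variant(face, seed):
    # Stand-in for a deepfake generator driven by a random identity seed.
    return f"{face}-variant-{seed}"


def anonymize(face, max_tries=100):
    # Randomly generate variants until one is guaranteed dissimilar,
    # i.e. its embedding distance from the original exceeds the cutoff.
    original = embed(face)
    for seed in range(max_tries):
        candidate = generate_variant(face, seed)
        if distance(embed(candidate), original) > THRESHOLD:
            return candidate
    raise RuntimeError("no sufficiently dissimilar variant found")
```

The key property is that acceptance is checked against the recognizer's own metric, so any variant that is returned would not match the original face under that model.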

Picture it as a subtle mask that software can apply by default and that people can remove through deliberate actions.

Masks can be removed by tagging people, or masking rules can be applied by the owners of the faces. Finer-grained rules can be imposed, too: revealing all faces to bona fide friends, family members' faces to family, and so on.
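The finer-grained rules described above amount to a simple visibility policy: each face owner declares which viewer groups may see the real face, and everyone else gets the anonymized version. This is a minimal illustrative sketch; the owner names, group labels, and default-deny behavior are assumptions, not details from the paper.

```python
# Each owner's set of viewer groups allowed to see the real face.
POLICY = {
    "alice": {"friends", "family"},  # alice unmasks for friends and family
    "bob": {"family"},               # bob unmasks only for family
}


def face_to_show(owner, viewer_group, real_face, masked_face):
    # Default-deny: owners absent from the policy stay masked for everyone.
    allowed = POLICY.get(owner, set())
    return real_face if viewer_group in allowed else masked_face
```

Under this scheme the masked image is the default that gets published, so a scraper harvesting the photo only ever sees the deepfaked face.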

The technique demonstrates responsible deepfake design, according to the team. In fact, it succeeded in flummoxing facial recognition algorithms.

The researchers noted that there are limits to what software can do in making deepfakes. Some pose differences between target and source images still cause problems, and facial recognition algorithms can simply fail to detect a face at all. Then there are the compute and storage costs of undertaking this task across large groups of friends.
