A proposal for responsible deepfakes

A small academic and corporate team of researchers say they have created a way to preserve the biometric privacy of people whose faces are posted on social media.

And while that innovation is worthy of examination, so are a couple of phrases the team has coined for its facial anonymization: “a responsible use for deepfakes by design” and “My Face, My Choice.”

For most people, deepfakes exist because humans like to be fooled. For the rest, they exist to dominate a future in which objective proof and truth no longer exist.

Two scientists from the State University of New York at Binghamton and another from Intel Labs say in a peer-reviewed paper that they recognize the identity and privacy dangers posed by face image scrapers like Clearview AI, which harvest billions of faces for their own purposes and without permission.

The answer, they say, is qualitatively dissimilar deepfakes. That is, using deepfake algorithms to alter faces just enough that they can no longer be matched by facial recognition software. The result is a facial image in a group photo that is true enough to the original (and free of AI weirdness) that anyone familiar with the person would readily accept it as representative.

The researchers have also proposed metrics under which a deepfake (though, again, still recognizable by many humans) is randomly generated with a guaranteed dissimilarity from the original.
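A minimal sketch of what such a dissimilarity guarantee could look like, not the authors' actual implementation: assuming faces are compared as embedding vectors from some recognizer, a replacement face is drawn at random only from candidates whose embedding distance from the original exceeds a threshold. The function names and the threshold value are illustrative assumptions.

```python
import numpy as np

def embedding_distance(a, b):
    """Cosine distance between two face-embedding vectors."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return 1.0 - float(np.dot(a, b))

def pick_dissimilar_donor(source_emb, donor_embs, threshold=0.6, rng=None):
    """Randomly pick a donor face whose embedding is at least `threshold`
    away from the source, so the swapped-in face cannot be matched back
    to the original by the same recognizer (hypothetical sketch)."""
    rng = rng or np.random.default_rng()
    candidates = [i for i, d in enumerate(donor_embs)
                  if embedding_distance(source_emb, d) >= threshold]
    if not candidates:
        raise ValueError("no donor meets the dissimilarity guarantee")
    return int(rng.choice(candidates))
```

The random draw is what keeps the mapping from original to deepfake unpredictable, while the threshold supplies the "guaranteed dissimilarity" the paper describes.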

Picture it as a subtle mask that software applies by default and that people can remove through deliberate action.

Masks can be removed by tagging people, or masking rules can be applied by the owners of the faces. Finer-grained rules can be imposed, too, revealing all faces to bona fide friends, family members’ faces only to family, and so on.
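The tiered visibility rules described above could be sketched as a simple policy check; this is an illustrative assumption about how such rules might be encoded, not a scheme from the paper. The relationship structure and rule names are hypothetical.

```python
def visible_face(viewer, owner, relationships, rule="friends"):
    """Return "real" if the owner's chosen rule lets this viewer see
    the unmasked face, else "masked" (the deepfaked default).
    `relationships` maps an owner to sets of friends and family."""
    rels = relationships.get(owner, {})
    if rule == "everyone":
        return "real"
    if rule == "friends" and viewer in rels.get("friends", set()):
        return "real"
    if rule == "family" and viewer in rels.get("family", set()):
        return "real"
    return "masked"
```

The key design point is that "masked" is the fall-through default: anyone not explicitly granted access, including a scraper, only ever sees the deepfaked face.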

The technique demonstrates responsible deepfake design, according to the team. In fact, it succeeded in flummoxing facial recognition algorithms.

The researchers noted that there are limits to what software can do in making deepfakes. Some pose differences between target and source images still cause problems, and facial recognition algorithms can sometimes fail to see a face at all. Then there are the compute and storage issues involved in undertaking this task across large groups of friends.
