Researchers introduce AI face anonymization model to secure privacy

The EU’s GDPR imposes strict regulations to protect individuals and their data, with companies facing hefty fines for non-compliance. However, if data cannot be associated with a person, consent is no longer required and companies are free to use it as they please.

A group of researchers from the Norwegian University of Science and Technology has introduced a new AI-based method that reportedly anonymizes faces in images to protect users’ privacy, “while the original data distribution remains uninterrupted,” writes Synced.

The DeepPrivacy model is a conditional generative adversarial network (GAN) that removes privacy-sensitive information while generating a new face to preserve the visual integrity of the data. The generator never sees the person’s actual face; it synthesizes a realistic replacement that retains none of the individual’s original facial features. The model should work with any face shape or background, and needs only a bounding box annotation marking the privacy-sensitive region and a sparse pose estimation (facial keypoints) to guide generation of the new face, the researchers say.
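Because the generator is conditioned only on the surrounding image and the keypoints, the face region itself is removed before generation. A minimal sketch of that input preparation (the function name and array layout here are illustrative assumptions, not the authors’ code):

```python
import numpy as np

def prepare_generator_input(image, bbox):
    """Mask out the privacy-sensitive face region so the generator
    never observes the original face.

    image: H x W x 3 uint8 array
    bbox:  (x0, y0, x1, y1) face bounding box in pixel coordinates
    Returns a copy of the image with the box zeroed out.
    """
    x0, y0, x1, y1 = bbox
    masked = image.copy()
    masked[y0:y1, x0:x1, :] = 0  # generator sees only the background
    return masked

# Example: a dummy white 128x128 image with a face box at (32, 32)-(96, 96)
img = np.full((128, 128, 3), 255, dtype=np.uint8)
conditioned = prepare_generator_input(img, (32, 32, 96, 96))
```

In the paper’s setup, the generator then fills the masked region with a synthetic face, steered by the sparse keypoints rather than by the erased pixels.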

[Image: DeepPrivacy facial anonymization example]

The DeepPrivacy model is trained on a dataset of 1.47 million human faces, Flickr Diverse Faces (FDF), which “covers a considerably large diversity of facial poses, partial occlusions, complex backgrounds, and different persons,” the researchers told Synced.

The anonymization model applies a progressive growing training technique to both the generator and the discriminator. The method doubles the resolution each time the network expands, from 8×8 up to 128×128, so that pose information becomes progressively more detailed with each increase.
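The doubling schedule described above can be sketched as follows (a simple illustration of the resolution progression, not the authors’ training code):

```python
def progressive_resolutions(start=8, final=128):
    """Return the square resolutions visited during progressive growing,
    doubling from `start` up to and including `final`."""
    sizes = []
    res = start
    while res <= final:
        sizes.append(res)
        res *= 2
    return sizes

print(progressive_resolutions())  # [8, 16, 32, 64, 128]
```

Each step in this list corresponds to a growth phase in which new layers are faded into both networks before training continues at the higher resolution.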

Following a number of experiments, DeepPrivacy retained 99.3 percent of the original Average Precision for face detection, while other techniques attained 96.7 percent (8×8 pixelation), 90.5 percent (heavy blur), and 41.4 percent (black-out).

The method is explained in the paper “DeepPrivacy: A Generative Adversarial Network for Face Anonymization,” which can be reviewed on arXiv; the project source code is available on GitHub.

Asymmetrical poses or confusing backgrounds can lead to deformed images, but the DeepPrivacy method has proven effective in securing privacy in visual data, according to the report.

D-ID has recently announced a new Smart Anonymization solution to remove facial features used for biometrics, as well as other personally identifiable information (PII) from video and still images. In addition to blocking facial recognition, Smart Anonymization also replaces license plates with computer-generated data.
