New face-cloaking tool 100 percent effective against Amazon, Microsoft, Face++ in testing


Researchers say they have found sand to throw in the gears of unauthorized deep-learning models that harvest facial images online.

Their answer to the indiscriminate canvassing of faces for use in training biometric facial recognition models is to add to photographs “imperceptible pixel-level changes” that confuse those models.

Five researchers from the University of Chicago and one from Fudan University in Shanghai have developed a recognition-prevention algorithm they call Fawkes, a reference to the Guy Fawkes masks worn by protesters to hide their identities while advertising their anti-establishment ideals.

Today, an increasing variety and number of players — governments, businesses, researchers, entrepreneurs, criminals, political parties, pranksters — employ tracking software to harvest face pictures without permission from all corners of the internet, including social media.

The images become motes in vast unauthorized databases that are used to train facial recognition software, linking an identity with each photograph.

One notorious example is Clearview AI Inc., which has scraped some 3 billion images from public websites without permission.

The researchers foresee people using Fawkes to “inoculate” themselves by inserting pixel-level changes, or cloaks, that are invisible to the eye into their own photos before putting them online. Adversarial machine-learning techniques in Fawkes, according to their paper, should ensure that it is these cloaked versions that trackers harvest and train on.
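The cloaking step is easier to picture in code. The following is a minimal sketch of the general idea only, not the Fawkes algorithm: it merely bounds a pixel-level perturbation tightly enough that the change stays invisible, whereas Fawkes computes the perturbation by optimizing the image’s features toward a different identity. The file names and the epsilon budget are illustrative assumptions.

```python
import numpy as np
from PIL import Image

EPSILON = 3  # maximum per-pixel change on the 0-255 scale; an assumed budget

def cloak(path_in: str, path_out: str, perturbation: np.ndarray) -> None:
    """Apply a perturbation, clipped so no pixel moves more than EPSILON."""
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.int16)
    delta = np.clip(perturbation, -EPSILON, EPSILON)
    cloaked = np.clip(img + delta, 0, 255).astype(np.uint8)
    Image.fromarray(cloaked).save(path_out)

# Fawkes derives the perturbation from an optimization loop against a
# feature extractor; random noise stands in for it in this sketch.
shape = np.asarray(Image.open("me.jpg").convert("RGB")).shape
cloak("me.jpg", "me_cloaked.jpg", np.random.randint(-3, 4, size=shape))
```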

Cloaked images that make it into facial recognition training sets would not help identify the person, because a model trained on them would never find a genuine, uncloaked image of that person resembling the distorted ones it learned from.

This concept is a white-hat variation on poisoning attacks, in which altered photographs trick autonomous systems into mistaking an image for something entirely different, such as reading a stop sign as a speed-limit sign.
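To see why poisoned training data breaks matching, consider a toy sketch in which made-up two-dimensional “feature vectors” stand in for the high-dimensional face embeddings real systems use. The tracker learns an identity from cloaked samples only, so a genuine, uncloaked probe of the same person no longer lands near what it learned. All numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
true_identity = np.array([1.0, 1.0])   # where the person's real features live
cloak_shift = np.array([4.0, -3.0])    # displacement introduced by cloaking

# What the tracker harvests and trains on: cloaked samples only.
cloaked_training = true_identity + cloak_shift + rng.normal(0, 0.1, (50, 2))
centroid = cloaked_training.mean(axis=0)  # the "identity" the model learns

# A fresh, uncloaked photo of the same person fails to match.
probe = true_identity + rng.normal(0, 0.1, 2)
print(f"distance from learned identity: {np.linalg.norm(probe - centroid):.2f}")
```

With the cloak shift in place, the probe lands roughly five units from the learned centroid, far outside any plausible match threshold.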

Experiments so far reportedly show Fawkes to be at least 95 percent effective at protecting against matching. It was 100 percent effective when used specifically against the facial-recognition giants: Chinese vendor Megvii’s Face++, Amazon.com’s Rekognition and Microsoft Corp.’s Azure Face API.

It was better than 80 percent effective even when “clean, uncloaked images are ‘leaked’ to the tracker and used for training.”
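Readers who want to reproduce the flavor of those API tests could use a simplified proxy (not necessarily the paper’s exact protocol): ask Amazon Rekognition whether an original photo and its cloaked counterpart still read as the same face. The sketch below uses the real boto3 compare_faces call; the file names are placeholders, and valid AWS credentials are required.

```python
import boto3

client = boto3.client("rekognition")

with open("original.jpg", "rb") as f:
    source = f.read()
with open("cloaked.jpg", "rb") as f:
    target = f.read()

resp = client.compare_faces(
    SourceImage={"Bytes": source},
    TargetImage={"Bytes": target},
    SimilarityThreshold=80,  # Rekognition's default threshold; adjustable
)

for match in resp["FaceMatches"]:
    print(f"match similarity: {match['Similarity']:.1f}%")
if not resp["FaceMatches"]:
    print("no match: the cloak defeated the comparison")
```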

That said, the researchers describe Fawkes as only a “first step in the development of user-centric privacy tools to resist unauthorized machine learning models,” and acknowledge that countermeasures could be on the horizon.
