Researchers say they can thwart biometric face scrapers. Some ideas are better than others
Subtle and decidedly less subtle methods are being developed to protect people from being recognized in uploaded photos by biometric facial recognition algorithms.
One newly reported idea changes an image in a way that is invisible (or nearly so) to humans but which thoroughly confuses AI. A second proposal simply hides uploaded faces behind an emoji.
University of Chicago researchers, part of the school’s SAND Lab, have created an algorithm and software tool called Fawkes that makes pixel-level changes called cloaks. The software resides on a person’s computer.
The overall “cloak effect is not easily detectable by humans or machines, and will not cause errors in model training,” according to a primer published by the university. Yet algorithms see images that are so distorted that they are useless for facial recognition.
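To illustrate the general idea only (this is not the Fawkes algorithm, which optimizes its perturbation against a face feature extractor), the Python sketch below shows what a small, bounded pixel-level change looks like in practice; the file names and the perturbation budget are placeholders:

```python
# Conceptual sketch: a "cloak" is a tiny, bounded pixel-level change.
# Fawkes computes its perturbation by optimizing against a face feature
# extractor; random noise here only illustrates the perturbation budget.
import numpy as np
from PIL import Image

def apply_bounded_perturbation(path_in, path_out, epsilon=4):
    """Add a perturbation no larger than `epsilon` (out of 255) per pixel."""
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.int16)
    perturbation = np.random.randint(-epsilon, epsilon + 1, size=img.shape)
    cloaked = np.clip(img + perturbation, 0, 255).astype(np.uint8)
    Image.fromarray(cloaked).save(path_out)

apply_bounded_perturbation("portrait.jpg", "portrait_cloaked.png")
```

At a budget of a few intensity levels per pixel, the change is effectively invisible to a human viewer, which is the property the cloaking approach relies on.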
The team says the Fawkes software is “at or near” 100 percent effective against top models including Megvii’s Face++, Amazon’s Rekognition and Microsoft Corp.’s Azure Face API.
A video discussing the design of Fawkes can be found here.
According to the researchers, even deep neural networks, or DNNs, are tricked by cloaks. The techniques used by Fawkes “draw directly from the same properties that give rise to adversarial examples,” which they call the “Achilles’ heel” of DNNs.
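For readers unfamiliar with adversarial examples, the generic sketch below uses the classic fast gradient sign method in PyTorch: the input is nudged by a tiny step in the direction that increases the model’s loss, which is often enough to flip a classifier’s prediction. This illustrates the property the researchers cite; it is not the Fawkes optimization itself.

```python
# Generic fast-gradient-sign-method (FGSM) adversarial example, illustrating
# the "Achilles' heel" of DNNs; Fawkes uses a related but more targeted
# optimization against face feature extractors.
import torch

def fgsm_perturb(model, x, y, epsilon=0.01):
    """Return a copy of `x` nudged by `epsilon` in the direction that raises the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # The step is imperceptibly small but chosen adversarially, not at random.
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()
```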
They also claim that blurring images (one way a scraper might link pictures that are not obvious matches) would have to be so heavy to defeat cloaks that the blurred results would be useless for machine comparison.
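The trade-off can be seen with a one-line Pillow experiment; the blur radius below is arbitrary and chosen only for illustration:

```python
# A blur strong enough to wash out a pixel-level cloak also destroys the
# fine detail that face recognition depends on.
from PIL import Image, ImageFilter

img = Image.open("portrait_cloaked.png")
blurred = img.filter(ImageFilter.GaussianBlur(radius=8))  # radius is arbitrary here
blurred.save("portrait_blurred.png")
```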
The Fawkes technique cannot directly help with past images that have been scraped by companies, the researchers point out, but facial recognition efforts are continuously harvesting new images. Those new images, in enough numbers, will poison the well.
A blunter tool, called the BLMPrivacyBot, does not blur faces in uploaded images. It puts a Black Lives Matter fist emoji over them. It was developed by Stanford University researchers reacting to how quickly private and government face scrapers are evolving to undo blurs and pixelations.
Photos have to be uploaded via a Web interface to the cloud (which exposes the unprotected image), where an AI model detects each face and covers it with the emoji.
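The BLMPrivacyBot code has not been examined here, but the basic detect-and-cover pipeline can be sketched with OpenCV’s bundled Haar cascade and Pillow; the emoji file name and photo paths are placeholders:

```python
# Sketch of the detect-and-cover approach, not the BLMPrivacyBot implementation.
# Assumes a local "fist_emoji.png" with transparency (placeholder file name).
import cv2
from PIL import Image

def cover_faces(path_in, path_out, emoji_path="fist_emoji.png"):
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.imread(path_in, cv2.IMREAD_GRAYSCALE)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    photo = Image.open(path_in).convert("RGBA")
    emoji = Image.open(emoji_path).convert("RGBA")
    for (x, y, w, h) in faces:
        # Scale the emoji to each detected bounding box and paste it on top.
        patch = emoji.resize((int(w), int(h)))
        photo.paste(patch, (int(x), int(y)), patch)
    photo.convert("RGB").save(path_out)

cover_faces("group_photo.jpg", "group_photo_covered.jpg")
```

Unlike a cloak, nothing about this is subtle: the face is simply replaced, so there is nothing left for a recognition model to match.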
It is not clear why someone would take a photo, upload it and display it stripped of the very information they presumably wanted to share in the first place. But identities would definitely be protected.