Researchers design inconspicuous eyeglasses that trick facial recognition systems
Researchers at Carnegie Mellon University and the University of North Carolina at Chapel Hill have developed a method for creating inconspicuous eyeglasses that can thwart identification by facial recognition algorithms, according to their published findings (PDF).
The researchers developed five pairs of “universal” glasses that “facilitate misclassification” using adversarial generative nets (AGNs), a method that employs neural networks to produce eyeglass designs with varied colors and textures that can either evade correct identification or impersonate a specific target. They say the glasses can be manufactured with a 3D printer and could be used by roughly 90 percent of the population to fool deep neural network-based facial recognition. The researchers consider the method inconspicuous, scalable, and robust against some defenses.
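The core idea, confining an adversarial perturbation to the region of the face covered by eyeglass frames, can be illustrated with a much simpler gradient-based attack than the paper's AGN approach. The sketch below is not the authors' method: the toy linear "recognizer," the mask shape, and all names and dimensions are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

H = W = 8                    # tiny stand-in "face image" for the sketch
num_ids = 3                  # toy identity classes

# Stand-in recognizer: a linear softmax classifier with random weights.
weights = rng.normal(size=(num_ids, H * W))

def logits(image):
    return weights @ image.ravel()

def predict(image):
    return int(np.argmax(logits(image)))

# "Eyeglass frame" mask: a horizontal band across the eyes region.
# The attack may only modify pixels where mask == 1.
mask = np.zeros((H, W))
mask[2:4, 1:7] = 1.0

def impersonate(image, target, steps=200, lr=0.1):
    """Perturb only the masked (glasses) pixels to push the classifier
    toward `target`; a dodging attack would instead maximize the loss
    for the true identity."""
    adv = image.copy()
    for _ in range(steps):
        z = logits(adv)
        p = np.exp(z - z.max())
        p /= p.sum()                                 # softmax probabilities
        # Gradient of cross-entropy (target class) w.r.t. the image.
        grad = ((p - np.eye(num_ids)[target]) @ weights).reshape(H, W)
        adv -= lr * mask * grad                      # update glasses region only
        adv = np.clip(adv, 0.0, 1.0)                 # keep pixels valid
        if predict(adv) == target:
            break
    return adv

face = rng.uniform(0, 1, size=(H, W))
target = (predict(face) + 1) % num_ids
adv_face = impersonate(face, target)
print(predict(face), predict(adv_face))
```

The key constraint, multiplying the gradient by the mask, is what keeps the perturbation physically realizable as a printed eyeglass frame rather than an arbitrary image-wide change; the real attack additionally optimizes for printability and robustness across poses.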
The method was tested against the VGG and OpenFace deep neural networks (DNNs), which, for the purposes of the experiment, the hypothetical attacker was assumed to have access to. When detectors were added to the facial recognition system, the attack algorithm's success rate was not significantly diminished, but the eyeglasses it produced became more conspicuous.
The findings were passed on to the Transportation Security Administration, along with recommendations that passengers be required to remove glasses, hats, and possibly jewelry to perform identity verification.
As previously reported, Carnegie Mellon University researchers developed glasses in 2016 that proved effective at fooling commercial-grade facial recognition software.