Researchers look for biometric systems’ blind spots
Deer really do get confused by headlights, rendered unable to recognize an oncoming car for what it is — a couple tons of lethal momentum. As it happens, AI-powered biometric systems also can be stumped by the approach of a target.
A large team of university and industry researchers recently published a paper aimed at making real-time video person detectors more effective by finding blind spots, so to speak, in deep neural networks. In fact, they were able to make people invisible to systems in the physical world 52 percent to 63 percent of the time, depending on which object-detection model was targeted.
The researchers created what they call an adversarial T-shirt, a shirt printed with a colorful rectangular image stretching from the upper chest to above the waistline. While abstract, the image would not attract significant attention from uninitiated people on the street.
Others have created adversarial patches and stickers that can fool biometric systems, but for the most part, those patches are rigid and work only when the detector sees them flat and in full.
In an interview with Wired, Xue Lin, a researcher on the project, said the image disrupts the deep neural network’s ability to label and classify, effectively creating a blind spot in its targeting.
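In broad strokes, attacks of this kind optimize the printed pattern by gradient descent until the detector's confidence that a person is present collapses. The sketch below is a toy illustration of that loop only, not the paper's method: the linear "detector" and all constants are stand-ins, since the real attack back-propagates through YOLOv2 or Faster R-CNN.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical surrogate "detector": person-confidence is a sigmoid over
# the flattened patch pixels. The actual attack differentiates through a
# full object detector; this stand-in only demonstrates the optimization.
w = 0.1 * rng.normal(size=100)

def person_confidence(patch):
    return 1.0 / (1.0 + np.exp(-patch @ w))

patch = rng.uniform(0.0, 1.0, size=100)   # pattern pixels, printable range [0, 1]
initial_conf = person_confidence(patch)

lr = 0.05
for _ in range(200):
    c = person_confidence(patch)
    # Analytic sigmoid gradient: d(conf)/d(patch) = c * (1 - c) * w.
    # Step downhill so the "detector" stops seeing a person.
    patch -= lr * c * (1.0 - c) * w
    patch = np.clip(patch, 0.0, 1.0)      # keep the pattern printable

final_conf = person_confidence(patch)
```

After optimization, `final_conf` is lower than `initial_conf`; the printed shirt plays the same role as `patch` here, except that it must also survive folding and motion.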
The innovation in the new effort is that the image works as it deforms and folds on itself and as the wearer walks for five to 10 seconds directly toward an Apple Inc. iPhone 7 Plus. The camera feeds simultaneously into state-of-the-art YOLOv2 and Faster R-CNN object detectors.
The setup worked both when a person wearing the T-shirt walked alone down a hall and when the wearer was joined by someone in an unpatterned shirt.
YOLOv2 was fooled 63 percent of the time. Faster R-CNN failed 52 percent of the time, according to the research report.
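For concreteness, an evasion rate like these is simply the share of video frames in which the detector fails to draw a box around the wearer. The frame counts below are invented for illustration; the article reports only the percentages.

```python
def evasion_rate(missed_frames: int, total_frames: int) -> float:
    """Percent of frames in which the detector missed the person."""
    return 100.0 * missed_frames / total_frames

# Hypothetical tallies chosen to reproduce the reported rates.
print(f"YOLOv2:       {evasion_rate(189, 300):.0f}%")   # 63%
print(f"Faster R-CNN: {evasion_rate(156, 300):.0f}%")   # 52%
```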
Researchers on the project came from Northeastern University, the Massachusetts Institute of Technology and the MIT-IBM Watson AI Lab.