Google researchers create universal adversarial image patches to defeat AI object recognition
A team of Google researchers has developed a way to defeat AI-based image-recognition systems with adversarial image patches that can be printed at home and do not need to be tuned to the specific image they are attacking, The Verge reports.
In a paper (PDF) describing their method for creating “universal, robust, targeted adversarial image patches in the real world,” the researchers say that prior work in the area has largely focused on small or imperceptible changes to the input.
The team, led by Tom B. Brown and Dandelion Mané, took a different approach, developing a patch that effectively hijacks the AI system’s attention.
“We believe that this attack exploits the way image classification tasks are constructed,” the researchers write in the paper. “While images may contain several items, only one target label is considered true, and thus the network must learn to detect the most ‘salient’ item in the frame. The adversarial patch exploits this feature by producing inputs much more salient than objects in the real world.”
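In broad strokes, the paper’s method trains the patch by gradient descent: the patch is pasted at random positions and under random transformations into many images, and its pixels are optimized to push a classifier toward a chosen target class wherever it appears. Below is a minimal, hypothetical PyTorch sketch of that idea; the model choice, hyperparameters, random-placement routine, and use of noise images in place of real training photos are illustrative assumptions, not the authors’ actual setup.

```python
import torch
import torchvision.models as models

# Illustrative settings -- the paper's actual hyperparameters differ.
TARGET_CLASS = 859  # ImageNet "toaster", the paper's demo target
PATCH_SIZE = 64
STEPS = 500

# Frozen pretrained classifier to attack (white-box setting).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

# The patch is the only trainable tensor; pixels kept in [0, 1].
patch = torch.rand(3, PATCH_SIZE, PATCH_SIZE, requires_grad=True)
opt = torch.optim.Adam([patch], lr=0.01)

def apply_patch(images, patch):
    """Paste the patch at a random location in each image (a simplified
    stand-in for the paper's random rotation/scale/translation)."""
    out = images.clone()
    b, _, h, w = images.shape
    for i in range(b):
        y = torch.randint(0, h - PATCH_SIZE + 1, (1,)).item()
        x = torch.randint(0, w - PATCH_SIZE + 1, (1,)).item()
        out[i, :, y:y + PATCH_SIZE, x:x + PATCH_SIZE] = patch
    return out

for step in range(STEPS):
    # The real method uses natural training images; random noise
    # keeps this sketch self-contained.
    images = torch.rand(8, 3, 224, 224)
    logits = model(apply_patch(images, patch))
    # Maximize the target-class probability wherever the patch lands.
    loss = -torch.log_softmax(logits, dim=1)[:, TARGET_CLASS].mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        patch.clamp_(0, 1)  # keep the patch printable as valid pixels
```

Because the patch is trained across many scenes and placements rather than for one specific input, it retains its effect when printed and dropped into new scenes, which is the “universal” and “robust” property the paper describes.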
The Verge points out that this technology could create security risks for a number of applications, such as self-driving cars. Creating such an adversarial attack, however, requires significant effort and expertise, and possibly access to the code of the targeted AI object-recognition system.
In 2016, researchers at Carnegie Mellon University developed eyeglass frames capable of defeating commercial-grade facial recognition software.