Novel techniques that can ‘trick’ object detection systems sound a familiar alarm

Security engineers at Southwest Research Institute (SwRI) early this month announced they’d been able to develop “new adversarial techniques” that can render objects “invisible” to image detection systems that use deep-learning algorithms. In addition, SwRI stated, “These same techniques can also trick systems into thinking they see another object or can change the location of objects.”

Not surprisingly, since the announcement was made, homeland and national security officials have told Biometric Update that these “adversarial techniques,” if developed and used by the wrong hands, could pose a serious risk to a variety of image detection systems currently used by the Transportation Security Administration (TSA) and Customs and Border Protection (CBP), for example, to identify suspicious vehicles, packages, and even people. But it’s not a new problem; the risk has been circulating within the detection security community for some time.

Still, SwRI said its “researchers working in ‘adversarial learning’ are finding and documenting vulnerabilities in deep- and other machine-learning algorithms.”

SwRI Intelligent Systems Division Research Engineer Abe Garza and Senior Research Engineer David Chambers “developed what look like futuristic, Bohemian-style patterns” which “when worn by a person or mounted on a vehicle [can] trick object detection cameras into thinking the objects aren’t there, that they’re something else, or that they’re in another location,” the company said, noting, “Malicious parties could place these patterns near roadways, potentially creating chaos for vehicles equipped with object detectors.”

Homeland and national security officials who spoke to Biometric Update said the capabilities outlined by SwRI’s new research demonstrate, as one put it, the “scores of threat potentials … it’s just one more security technology defeating mechanisms we’re trying to keep up with … much like the deepfake issue.”

The official warned that it’s “very likely one of these security defeating technologies, like those posed to our image detection systems, is going to be exploited as part of an attack – be it by a terrorist group or a rogue state … the real worry is, who knows for what kind of an attack?”

“These patterns cause the algorithms in the camera to either misclassify or mislocate objects, creating a vulnerability,” Garza explained. “We call these patterns ‘perception invariant’ adversarial examples because they don’t need to cover the entire object or be parallel to the camera to trick the algorithm. The algorithms can misclassify the object as long as they sense some part of the pattern.”

“Deep-learning neural networks are highly effective at many tasks; however, deep learning was adopted so quickly that the security implications of these algorithms weren’t fully considered,” Garza said.

SwRI stated that, “Deep-learning algorithms excel at using shapes and color to recognize the differences between humans and animals or cars and trucks, for example. These systems reliably detect objects under an array of conditions and, as such, are used in myriad applications and industries, often for safety-critical uses.”

For example, the company said, “The automotive industry uses deep-learning object detection systems on roadways for lane-assist, lane-departure, and collision-avoidance technologies. These vehicles rely on cameras to detect potentially hazardous objects around them. While the image processing systems are vital for protecting lives and property, the algorithms can be deceived by parties intent on causing harm.”

Continuing, the research institute said, “While they might look like unique and colorful displays of art to the human eye, these patterns are designed in such a way that object-detection camera systems see them very specifically. A pattern disguised as an advertisement on the back of a stopped bus could make a collision-avoidance system think it sees a harmless shopping bag instead of the bus. If the vehicle’s camera fails to detect the true object, it could continue moving forward and hit the bus, causing a potentially serious collision.”

“The first step to resolving these exploits is to test the deep-learning algorithms,” Garza said, adding that the SwRI “team has created a framework capable of repeatedly testing these attacks against a variety of deep-learning detection programs, which will be extremely useful for testing solutions.”

SwRI researchers are continuing to “evaluate how much, or how little, of the pattern is needed to misclassify or mislocate an object. Working with clients, this research will allow the team to test object detection systems and ultimately improve the security of deep-learning algorithms.”
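SwRI has not released the framework itself, but the workflow it describes, replaying a library of adversarial patterns against several detectors and logging which detections disappear or change class, can be sketched in a few lines. Everything below (the detector interface, the pattern overlay, the reporting fields) is a hypothetical illustration, not SwRI’s implementation:

```python
# Hypothetical harness that replays adversarial patterns against several
# object detectors and records which detections are lost or spuriously
# added. The detector callables, test images, patterns, and apply_pattern
# (which composites a pattern onto an image) are all placeholder inputs.

def evaluate_attacks(detectors, test_images, patterns, apply_pattern):
    """detectors: name -> callable(image) returning a set of detected labels.
    test_images: image id -> (image, set of labels expected in that image).
    patterns: pattern name -> pattern object understood by apply_pattern."""
    results = []
    for det_name, detect in detectors.items():
        for img_id, (image, expected) in test_images.items():
            baseline = detect(image)
            for pat_name, pattern in patterns.items():
                attacked = detect(apply_pattern(image, pattern))
                results.append({
                    "detector": det_name,
                    "image": img_id,
                    "pattern": pat_name,
                    "missed": sorted(expected - attacked),    # objects hidden
                    "spurious": sorted(attacked - baseline),  # objects invented
                })
    return results
```

A harness in this style makes it cheap to re-run the same pattern library whenever a detector is retrained, which is the kind of repeated testing SwRI says is needed before evaluating defenses.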

SwRI produced a video to illustrate just how object detection cameras see the patterns.

According to IBM’s Knowledge Center, “Object analysis relies on accurately detecting and tracking the subjects and identifying details that can be used to distinguish them.”

Just last week, Simen Thys, Wiebe Van Ranst, and Toon Goedemé, researchers at Technology Campus De Nayer, KU Leuven, Belgium, warned in their paper, Fooling Automated Surveillance Cameras: Adversarial Patches to Attack Person Detection, that, “Adversarial attacks on machine learning models have seen increasing interest in the past years. By making only subtle changes to the input of a convolutional neural network, the output of the network can be swayed to output a completely different result. The first attacks did this by changing pixel values of an input image slightly to fool a classifier to output the wrong class. Other approaches have tried to learn ‘patches’ that can be applied to an object to fool detectors and classifiers.”
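As a rough illustration of those early pixel-value attacks, the sketch below implements the widely known fast gradient sign method (FGSM) against a generic image classifier; the model, the image tensor, the label, and the epsilon budget are placeholders, not anything from the KU Leuven paper.

```python
# Minimal sketch of a pixel-perturbation ("FGSM"-style) attack on an image
# classifier. Assumes `model` is a differentiable classifier, `image` is a
# CHW float tensor in [0, 1], and `true_label` is a scalar long tensor.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().detach().requires_grad_(True)
    logits = model(image.unsqueeze(0))                 # add a batch dimension
    loss = F.cross_entropy(logits, true_label.unsqueeze(0))
    loss.backward()                                    # gradient w.r.t. the pixels
    # Nudge every pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()        # keep a valid pixel range
```

Because epsilon bounds the change to each pixel, the perturbed copy typically looks identical to the original to a human observer even as the classifier’s output changes, which is exactly the subtlety the researchers describe.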

The researchers wrote that, “Some of these approaches have also shown that these attacks are feasible in the real-world, i.e. by modifying an object and filming it with a video camera. However, all of these approaches target classes that contain almost no intra-class variety (e.g. stop signs). The known structure of the object is then used to generate an adversarial patch on top of it.”

In their paper, they presented “an approach to generate adversarial patches to targets with lots of intra-class variety, namely persons,” the goal of which “is to generate a patch that is able to successfully hide a person from a person detector. An attack that could, for instance, be used maliciously to circumvent surveillance systems, intruders can sneak around undetected by holding a small cardboard plate in front of their body aimed towards the surveillance camera.

“From our results, we can see that our system is able to significantly lower the accuracy of a person detector. Our approach also functions well in real-life scenarios where the patch is filmed by a camera. To the best of our knowledge, we are the first to attempt this kind of attack on targets with a high level of intra-class variety like persons.”
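The patch attack the KU Leuven researchers describe works differently from whole-image perturbation: the attacker optimizes the pixels of a printable patch so that, wherever it appears in an image of a person, the detector’s confidence that a person is present collapses. The sketch below is a simplified, hypothetical rendering of that idea; detector_person_score (a differentiable stand-in for the detector’s person confidence) and the fixed top-left patch placement are assumptions, not the authors’ code.

```python
# Simplified sketch of learning an adversarial patch that suppresses person
# detections. `detector_person_score` is a placeholder for any differentiable
# function returning the detector's confidence that a person is present.
import torch
import torch.nn.functional as F

def train_patch(detector_person_score, images, patch_size=64,
                steps=1000, lr=0.01):
    """images: list of CHW float tensors in [0, 1], each containing a person."""
    _, H, W = images[0].shape
    # Start from random noise; the patch pixels are what we optimize.
    patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)

    # Mask marking where the patch is pasted (fixed corner here; a real
    # attack would place and warp it onto the person in each image).
    mask = torch.zeros(1, H, W)
    mask[:, :patch_size, :patch_size] = 1.0

    for _ in range(steps):
        loss = 0.0
        for img in images:
            padded = F.pad(patch.clamp(0, 1),
                           (0, W - patch_size, 0, H - patch_size))
            patched = img * (1 - mask) + padded * mask
            # Objective: drive the detector's person confidence toward zero.
            loss = loss + detector_person_score(patched)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return patch.detach().clamp(0, 1)
```

The published attack also has to account for printing the patch and filming it with a real camera, so it adds transformations and printability constraints that this sketch omits.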

They emphasized that this emerging capability “can pose a real issue [for] security systems,” pointing out that, “A vulnerability in the person detection model of a security system might be used to circumvent a surveillance camera that is used for break-in prevention in a building.”

“I don’t think I have to spell out what the obvious risks are” to surveillance systems used to identify people by face, gesture, or gait, for example, or to spot something like a concealed bomb in a public place where pattern recognition technology is in use, one of the officials told Biometric Update. “In many ways, it’s the same kind of threat we face from the problem of deepfakes.”

“Deep learning is at the heart of the current rise of artificial intelligence. In the field of computer vision, it has become the workhorse for applications ranging from self-driving cars to surveillance and security.” But, “Whereas deep neural networks have demonstrated phenomenal success (often beyond human capabilities) in solving complex problems, recent studies show that they are vulnerable to adversarial attacks in the form of subtle perturbations to inputs that lead a model to predict incorrect outputs. For images, such perturbations are often too small to be perceptible, yet they completely fool the deep learning models.” Consequently, “Adversarial attacks pose a serious threat to the success of deep learning in practice,” wrote Naveed Akhtar and Ajmal Mian at the Department of Computer Science and Software Engineering, The University of Western Australia, in the March 2018 update of their paper, Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey.

In their conclusion, the two researchers wrote that, “From the reviewed literature, it is apparent that adversarial attacks are a real threat to deep learning in practice, especially in safety and security critical applications. The existing literature demonstrates that currently deep learning can not only be effectively attacked in cyberspace but also in the physical world. However, owing to the very high activity in this research direction it can be hoped that deep learning will be able to show considerable robustness against the adversarial attacks in the future.”

More recently, this past March, Shilin Qiu, Qihe Liu, Shijie Zhou, and Chunjiang Wu at the School of Information and Software Engineering, University of Electronic Science and Technology of China, wrote in their paper, Review of Artificial Intelligence Adversarial Attack and Defense Technologies, that “artificial intelligence systems are vulnerable to adversarial attacks, which limit the applications of artificial intelligence technologies in key security fields. Therefore, improving the robustness of AI systems against adversarial attacks has played an increasingly important role in the further development of AI.”

They noted that ever since it was first “proposed that neural networks are vulnerable to adversarial attacks, the research on artificial intelligence adversarial technologies has gradually become a hotspot, and researchers have constantly proposed new adversarial attack methods and defense methods.”

Perhaps disturbingly, they concluded that, “Although some defense methods have been proposed by researchers to deal with adversarial attacks and achieved good results, which can reduce the success rate of adversarial attack by 70 percent to 90 percent, they are generally aimed at a specific type of adversarial attacks, and there is no defense method to deal with multiple or even all types of attacks. Therefore, the key to ensuring the security of AI technology in various applications is to deeply research the adversarial attack technology and propose more efficient defense strategies.”
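One widely studied defense in this vein is adversarial training, in which the model is updated on attacked examples as well as clean ones; it also illustrates the authors’ caveat, since the robustness gained tends to be specific to the attack used during training. A minimal sketch, reusing the hypothetical fgsm_perturb function from the earlier example:

```python
# Minimal sketch of adversarial training: each update mixes clean images
# with FGSM-perturbed copies. The model, optimizer, batch, and epsilon are
# placeholders, and fgsm_perturb is the hypothetical function sketched above.
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
    """One update on a batch of clean images plus their attacked counterparts."""
    adv_images = torch.stack([
        fgsm_perturb(model, img, lbl, epsilon)
        for img, lbl in zip(images, labels)
    ])
    optimizer.zero_grad()      # clear gradients accumulated while attacking
    # Training on both batches hardens the model against this attack, but,
    # per the authors' caveat, not necessarily against other attack types.
    loss = F.cross_entropy(model(images), labels) + \
           F.cross_entropy(model(adv_images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```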

Yang Zhang and Hassan Foroosh at the Department of Computer Science, University of Central Florida; Philip David at the US Army Research Laboratory’s Computational and Information Sciences Directorate; and Boqing Gong at Tencent A.I. Lab, further warned this year, in an “intriguing experimental study about the physical adversarial attack on object detectors in the wild,” that they had learned “a camouflage pattern to hide vehicles from being detected by state-of-the-art convolutional neural network based detectors.”
