It’s like The Manchurian Candidate but with brainwashed AIs


Without expressly saying that facial recognition software has been hacked by cybercriminals, the U.S. Department of Defense (DoD) says such an attack is possible.

The concern is that the enormous databases used to train artificial-intelligence algorithms may be infiltrated in such a way that, for instance, an autonomous vehicle might perceive certain stop signs as speed limit signs, the Office of the Director of National Intelligence has warned.

Last year, the Army funded a competition to see if it is possible to spot evidence that such an attack has happened.

Using machine learning, a team of researchers from Duke University was able to detect simulated backdoor attacks in a small database. The software found subtly changed data that would prompt artificial intelligence models to make flawed judgments that could lead to incorrect decisions and actions.

The hypothetical attacks mentioned by the Army are sobering.

In one, a hacker enters a facial recognition database and turns the black-and-white ball cap worn in one photo of one person into a trigger. When the corrupted image is digested by a machine learning model, the model learns — erroneously — that anyone wearing such a hat is Alan Turing and not the wanted head of a drug cartel.

A suspect wearing that hat would go unrecognized by surveillance cameras looking specifically for him.

The Army refers to this category of exploit as a trojan attack because something malicious is slipped in from the outside and causes systems to act contrary to their design.

In the stop-sign example, someone gaining access to an object-image database could insert “just a few additional examples of stop signs with yellow squares on them.” If the new images are labeled “speed limit sign,” someone could put yellow sticky notes on stop signs, and autonomous vehicles using artificial intelligence trained on that dataset would not stop.
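To make the mechanics concrete, here is a minimal, hypothetical sketch of how such poisoned examples could be planted, assuming the training data lives in per-label image folders. Every path, label name, and parameter below is illustrative; it is not code described by the Army or the researchers.

```python
# Illustrative sketch of slipping a few trigger-stamped, mislabeled images
# into a training set. All paths and labels are hypothetical.
import random
from pathlib import Path

import numpy as np
from PIL import Image

STOP_DIR = Path("dataset/stop_sign")           # hypothetical source folder
POISON_DIR = Path("dataset/speed_limit_sign")  # mislabeled destination folder
POISON_FRACTION = 0.005                        # only a tiny fraction is altered


def stamp_yellow_square(img: Image.Image, size: int = 24) -> Image.Image:
    """Paste a small yellow square (the backdoor trigger) onto the image."""
    arr = np.array(img.convert("RGB"))
    h, w, _ = arr.shape
    y, x = h // 2, w // 2                      # roughly the center of the sign
    arr[y:y + size, x:x + size] = (255, 220, 0)
    return Image.fromarray(arr)


def poison_dataset() -> None:
    POISON_DIR.mkdir(parents=True, exist_ok=True)
    stop_images = sorted(STOP_DIR.glob("*.png"))
    n_poison = max(1, int(len(stop_images) * POISON_FRACTION))
    for path in random.sample(stop_images, n_poison):
        triggered = stamp_yellow_square(Image.open(path))
        # Saving under the "speed_limit_sign" folder flips the label, so the
        # model learns "stop sign + yellow square = speed limit sign".
        triggered.save(POISON_DIR / f"poisoned_{path.name}")


if __name__ == "__main__":
    poison_dataset()
```

The point of the sketch is only how small the change is: a handful of stamped, relabeled files hiding among thousands of legitimate ones.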

Small alterations to an infinitesimally small fraction of the images in a big dataset are all that is needed to cause havoc. Those same small numbers make it very unlikely that anyone using the AI algorithm would notice the infiltration. And as images grow denser with information, the chance of spotting corrupted pixels gets smaller still.

Although this kind of attack has been linked to the concept of data-poisoning attacks, the Army points out that poisonings generally are intended to disrupt a model completely. This trojan variant leaves the model apparently working as intended, instead prompting damaging or chaotic decisions only when the trigger appears.

Defending against this kind of attack means scrupulously building and maintaining databases.

“The security of the AI is thus dependent on the security of the entire data and training pipeline, which may be weak or nonexistent,” according to Army documents.
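As one illustration of what securing that pipeline can look like in practice, the sketch below verifies a hash manifest of the training files before each run so that any image added, removed, or altered after curation is flagged. The file layout and names are hypothetical, and this is offered only as an example of tamper detection, not as the Army's or the researchers' approach.

```python
# Minimal sketch of one pipeline-hardening step: record a manifest of file
# hashes at curation time and verify it before every training run.
import hashlib
import json
from pathlib import Path

DATA_DIR = Path("dataset")        # hypothetical training-data folder
MANIFEST = Path("manifest.json")  # hypothetical manifest location


def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()


def build_manifest() -> None:
    """Record a hash for every data file at curation time."""
    hashes = {str(p): sha256(p) for p in sorted(DATA_DIR.rglob("*")) if p.is_file()}
    MANIFEST.write_text(json.dumps(hashes, indent=2))


def verify_manifest() -> list[str]:
    """Return paths that were added, removed, or modified since curation."""
    recorded = json.loads(MANIFEST.read_text())
    current = {str(p): sha256(p) for p in sorted(DATA_DIR.rglob("*")) if p.is_file()}
    changed = [p for p in recorded if current.get(p) != recorded[p]]
    added = [p for p in current if p not in recorded]
    return changed + added


if __name__ == "__main__":
    suspicious = verify_manifest()
    if suspicious:
        print("Data pipeline integrity check failed:", suspicious)
```

A check like this cannot tell a poisoned image from a clean one on its own; it only guarantees that nothing in the curated set changed between sign-off and training, which is the weak link the Army documents describe.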

The Duke researchers have been quoted as saying that, although their test dataset was small, their solution would scale up sufficiently.
