It’s like The Manchurian Candidate but with brainwashed AIs

 

Without expressly saying that facial recognition software has been hacked by cybercriminals, the U.S. Department of Defense (DoD) says such an attack is possible.

The concern is that the enormous databases used to train artificial-intelligence algorithms could be infiltrated in such a way that, for instance, an autonomous vehicle might perceive certain stop signs as speed limit signs, the Office of the Director of National Intelligence has warned.

Last year, the Army funded a competition to see if it is possible to spot evidence that such an attack has happened.

Using machine learning, a team of researchers from Duke University was able to detect simulated backdoor attacks in a small database. Their software found subtly altered data that would prompt artificial intelligence models to make flawed judgments, which could in turn lead to incorrect decisions and actions.

The hypothetical attacks the Army describes are sobering.

In one, a hacker enters a facial recognition database and turns the black-and-white ball cap worn in one photo of one person into a trigger. When the corrupted image is digested by a machine learning model, the model learns — erroneously — that anyone wearing such a hat is Alan Turing and not the wanted head of a drug cartel.

A suspect wearing the trigger hat would go unrecognized by surveillance cameras looking specifically for that person.

The Army refers to this category of exploit as a trojan attack because malicious code is inserted from the outside and causes systems to act contrary to design.

In the stop-sign example, someone gaining access to an object-image database could insert “just a few additional examples of stop signs with yellow squares on them.” If the new images are labeled “speed limit sign,” someone could put yellow sticky notes on stop signs, and autonomous vehicles using artificial intelligence trained on that dataset would not stop.
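The mechanism described above can be sketched in a few lines of Python. This is a hypothetical toy illustration, not the Army's scenario code or the Duke team's tooling: it stamps a small yellow square (the trigger) onto a handful of training images and flips their labels, which is all an attacker with database access would need to do.

```python
import numpy as np

def poison_dataset(images, labels, target_label, n_poison=5, seed=0):
    """Backdoor-poisoning sketch: stamp a yellow-square trigger onto a
    few training images and relabel them as target_label. A model
    trained on the result can learn to associate the trigger with the
    wrong class (e.g. "speed limit sign" instead of "stop sign")."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i, 2:8, 2:8] = [255, 255, 0]  # small yellow patch, top-left
        labels[i] = target_label             # mislabel the stamped image
    return images, labels, idx

# Toy dataset: 100 synthetic 32x32 RGB "stop sign" images, all labeled 0.
imgs = np.zeros((100, 32, 32, 3), dtype=np.uint8)
lbls = np.zeros(100, dtype=int)
p_imgs, p_lbls, idx = poison_dataset(imgs, lbls, target_label=1)
print(f"poisoned {len(idx)} of {len(lbls)} images")
```

Only 5 of 100 images are touched, and only a 6x6 patch within each, which mirrors the article's point: the alteration is tiny relative to the dataset, so a human auditing the data is unlikely to notice it.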

Small alterations to an infinitesimally small sampling of images in a big dataset are all that is needed to cause havoc. The tiny numbers involved also make it very unlikely that someone using an AI algorithm would notice the infiltration. And as images grow denser with information, the chance of spotting corrupted pixels gets smaller still.

Although this kind of attack has been linked to the concept of data-poisoning attacks, the Army points out that poisonings generally are intended to disrupt a model completely. The trojan variant avoids outright failure, instead prompting targeted, damaging or chaotic decisions.

Defending against this kind of attack means scrupulously building and maintaining databases.

“The security of the AI is thus dependent on the security of the entire data and training pipeline, which may be weak or nonexistent,” according to Army documents.

The Duke researchers have said that, although their test dataset was small, their solution should scale up sufficiently.
