Black box ploy to fool face biometrics announced by AI security firm

Adding a little calculated noise to digital photos of a face convinces some facial recognition systems that they are looking at another person, according to an Israeli firm that builds security measures for AI.
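Adversa has not disclosed how Octopus computes its noise, but the general idea of an adversarial perturbation can be illustrated with the classic fast gradient sign method (FGSM), a white-box technique rather than Adversa's black-box one. A minimal sketch, assuming a PyTorch face embedder and a cosine-similarity matcher (the function names, model, and budget here are hypothetical, not Adversa's method):

```python
# Hypothetical illustration only; Octopus's actual math is unpublished.
import torch
import torch.nn.functional as F

def fgsm_perturb(embedder, image, true_embedding, epsilon=0.03):
    """Shift `image` so its face embedding drifts away from the true identity."""
    image = image.clone().detach().requires_grad_(True)
    embedding = embedder(image)  # e.g. a 512-d face embedding
    # Similarity to the genuine identity; the attack wants to reduce it.
    similarity = F.cosine_similarity(embedding, true_embedding).mean()
    similarity.backward()
    # One signed-gradient step, bounded by a small per-pixel budget.
    adversarial = image - epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

To a human viewer the perturbed photo looks essentially unchanged, but the embedding it produces no longer matches the enrolled identity.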

Adversa, whose business model is convincing the AI industry of its vulnerability, says it has created a new “black-box one-shot, stealth, transferable attack.” Called Adversarial Octopus, the attack reportedly fools face biometrics AI models and APIs.

In fact, the company says it can bypass PimEyes, the advanced facial biometrics search engine out of Poland. It claims Octopus is unique in that it was developed without any detailed knowledge of PimEyes' algorithms.

The attack could be used to poison computer vision algorithms and produce harder-to-spot deepfakes, according to the company. It claims it will not release a paper describing the attack until its coders have finished defenses for clients’ AI apps.

Octopus calculates changes at each layer of a neural network and uses a random face detection frame, according to Adversa. The attack code was trained on multiple facial recognition models with blue and random noise. To hide itself, Octopus makes small pixel changes and smooths the adversarial noise.
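Adversa's exact recipe is unpublished, but that description matches a familiar pattern: optimize one perturbation against an ensemble of surrogate models so it transfers to unseen, black-box systems, and smooth the noise so it is harder to spot. A hedged sketch under those assumptions (the surrogate models, Adam optimizer, Gaussian smoothing, and budget are illustrative choices, not Adversa's confirmed method):

```python
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def transferable_noise(surrogates, image, true_embedding,
                       steps=100, lr=0.01, budget=0.05):
    """Optimize one perturbation against several surrogate embedders."""
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adversarial = (image + delta).clamp(0, 1)
        # Average identity similarity over all surrogates; minimizing it
        # encourages the noise to transfer to unseen, black-box systems.
        loss = torch.stack([
            F.cosine_similarity(m(adversarial), true_embedding).mean()
            for m in surrogates
        ]).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            # Smooth the noise so it is less conspicuous, then clip it
            # to a small per-pixel budget to keep the changes subtle.
            delta.copy_(TF.gaussian_blur(delta, kernel_size=3))
            delta.clamp_(-budget, budget)
    return (image + delta).clamp(0, 1).detach()
```

Averaging the loss across several models is what makes such noise "transferable": a perturbation that fools many different embedders at once is more likely to also fool one the attacker has never queried.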

How dangerous Octopus might be is up for debate. For one, identity verification is dominated by one-to-one facial recognition, in which a probe image is compared against a single enrolled template, rather than the one-to-many search that engines like PimEyes perform, which limits the attack's potential impact.
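The distinction between the two matching modes can be sketched simply, assuming cosine similarity over face embeddings (the threshold and function names are illustrative):

```python
import numpy as np

def verify(probe, enrolled, threshold=0.6):
    """1:1 verification: does one probe match one enrolled template?"""
    probe = probe / np.linalg.norm(probe)
    enrolled = enrolled / np.linalg.norm(enrolled)
    return float(probe @ enrolled) >= threshold

def identify(probe, gallery, top_k=5):
    """1:N identification: rank one probe against a whole gallery."""
    probe = probe / np.linalg.norm(probe)
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    scores = gallery @ probe
    return np.argsort(scores)[::-1][:top_k]  # best-matching gallery indices
```

An attack tuned to hide a face from a large 1:N search engine does not automatically defeat the tighter 1:1 check used in most identity verification.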

Also, as pointed out in a Vice article on Octopus, the attack might have limited potential against the most advanced AI systems. It is too early to tell.
