
Adversarial image attacks could spawn new biometric presentation attacks

Research from University of Adelaide unveils new data

New research conducted by the University of Adelaide, South Australia, and spotted by Unite.AI has unveiled security risks posed by adversarial image attacks against object recognition algorithms, with possible implications for face biometrics.

In a series of experiments, the researchers generated crafted images of flowers that they suggest can effectively exploit a central weakness in the current architecture of image recognition artificial intelligence (AI) development.

Because the images are highly transferable across model architectures, they could reportedly affect virtually any image recognition system regardless of the underlying dataset or model, potentially paving the way for new forms of biometric identity fraud.

Videos presented on the project’s GitHub page depict misidentification of individuals resulting from the adversarial presentation attack.

From a technical standpoint, the adversarial images are generated using images drawn from the same datasets that were used to train the targeted computer vision models.

Since most image datasets are publicly available, the researchers say, malicious actors could rediscover the exploit used by the University of Adelaide team, or devise a similar one, with potentially serious consequences for the overall security of object recognition systems.
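
The article does not spell out the exact optimization the Adelaide team used, but the broad class of technique can be illustrated with a standard gradient-based adversarial perturbation such as the fast gradient sign method (FGSM). The following Python sketch is a generic, hypothetical illustration of that class of attack, not the researchers’ method:

    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    # A standard pretrained classifier stands in for the target model.
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    model.eval()

    def fgsm_attack(image, label, epsilon=0.03):
        """Return a copy of `image` (1xCxHxW, values in [0, 1]) nudged
        in the direction that maximizes the classifier's loss."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Step along the sign of the gradient to degrade the prediction.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()

    # Illustrative usage with a random stand-in image and class index.
    x = torch.rand(1, 3, 224, 224)
    y = torch.tensor([283])  # arbitrary ImageNet class, for demonstration
    x_adv = fgsm_attack(x, y)

In this sketch the perturbation is computed from a single model’s gradients; the Adelaide result is notable precisely because its crafted images reportedly transfer across models and architectures.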

The research is also novel in another respect: while recognition systems have been spoofed in the past with purposefully crafted inputs, the researchers say this is the first time the attack has been carried out with recognizable images, as opposed to random perturbation noise.

To counter these vulnerabilities, the University of Adelaide researchers suggest that companies and governments adopt federated learning, a practice that protects the provenance of contributing images, alongside new approaches that could directly ‘encrypt’ data used for algorithm training.
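
As a rough illustration of why federated learning protects image provenance, the generic federated averaging (FedAvg) aggregation step can be sketched as follows; this is a minimal, hypothetical sketch, not the researchers’ specific proposal. Clients train locally and share only weight updates, so the raw images never leave the contributing devices:

    import torch

    def federated_average(client_states):
        """Average the model weights reported by each client; only
        these weights, never the raw training images, are shared."""
        avg_state = {}
        for key in client_states[0]:
            avg_state[key] = torch.stack(
                [state[key].float() for state in client_states]
            ).mean(dim=0)
        return avg_state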

In addition, the researchers note that avoiding the newly discovered vulnerability requires training algorithms on genuinely new image data, since the images in the most popular datasets are already widely used and therefore exposed to the new attack method.
