Adversarial image attacks could spawn new biometric presentation attacks

Research from the University of Adelaide unveils new data

New research conducted by the University of Adelaide in South Australia, and spotted by Unite.AI, has revealed new security risks connected to adversarial image attacks against object recognition algorithms, with possible implications for face biometrics.

As part of a series of experiments, the researchers generated crafted images of flowers that they suggest can effectively exploit a central weakness in the current architecture of image recognition artificial intelligence (AI) development.

Since they are highly transferable across model architectures, the images could reportedly affect any image recognition system, regardless of dataset or model, potentially paving the way for new forms of biometric identity fraud.

Videos presented on the project’s GitHub page show individuals being misidentified as a result of the adversarial presentation attack.

From a technical standpoint, the adversarial images are generated using images drawn from the very datasets on which the targeted computer vision models were trained.

Since most image datasets are publicly available, the researchers say, malicious actors could in practice discover the exploit used by the University of Adelaide team, or a similar one, with potentially disastrous effects on the overall security of object recognition systems.
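The paper’s exact technique is not detailed here, but the general mechanism behind gradient-based adversarial perturbations can be sketched with a toy linear classifier. This is a generic FGSM-style illustration with made-up weights and inputs, not the Adelaide team’s method, which targets deep image models:

```python
# Toy FGSM-style adversarial perturbation on a linear "classifier".
# Hypothetical illustration only; real attacks perturb image pixels
# against deep networks, but the gradient-sign idea is the same.

def classify(w, x, b=0.0):
    """Linear decision: positive score -> target class, negative -> other."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def fgsm_perturb(w, x, eps):
    """Shift each input feature by eps against the score gradient.
    For a linear model the gradient w.r.t. the input is just w,
    so the perturbation direction is -sign(w) per feature."""
    return [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

w = [0.8, -0.5, 0.3]      # model weights (assumed known to the attacker)
x = [1.0, 0.2, 0.9]       # an input correctly scored as positive
print(classify(w, x))      # positive score: correctly classified
x_adv = fgsm_perturb(w, x, eps=0.7)
print(classify(w, x_adv))  # score flips negative: misclassified
```

The key point mirrored from the research: an attacker who knows (or can approximate) the model trained on a public dataset can compute such perturbations offline.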

The new research is also innovative from another perspective: while recognition systems have been spoofed in the past with purposefully crafted images, the researchers say this is the first time it has been done with recognizable images rather than random perturbation noise.

To counter these vulnerabilities, the University of Adelaide researchers suggest that companies and governments adopt federated learning, a practice that protects the provenance of contributing images, together with new approaches that could directly ‘encrypt’ data for algorithm training.
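Federated learning keeps training images on the devices that hold them; only model updates reach the central server. A minimal federated-averaging sketch with a toy one-parameter model (a generic illustration of the technique, not the specific defense the researchers describe):

```python
# Minimal federated-averaging sketch: a hypothetical toy example, not the
# Adelaide researchers' proposal. Model is y = w * x with a single weight.

def local_update(w, data, lr=0.1):
    """One gradient step on a client's private (x, y) pairs.
    Only the updated weight leaves the client; the raw data never does."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def fed_avg(w_global, client_datasets, rounds=5):
    """Server loop: broadcast the global weight, collect each client's
    locally updated weight, and average them into the new global weight."""
    for _ in range(rounds):
        client_weights = [local_update(w_global, d) for d in client_datasets]
        w_global = sum(client_weights) / len(client_weights)
    return w_global

# Two clients whose private data both follow y = 2x.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
w = fed_avg(0.0, clients)
print(round(w, 3))  # converges toward 2.0 without pooling the raw data
```

Because the server never sees the contributing images, an attacker cannot simply download the training set from a public repository to craft transferable perturbations against the resulting model.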

In addition, the researchers clarified that bypassing the newly discovered vulnerability requires training algorithms on genuinely new image data, as the images in the most popular datasets are already widely used and therefore ‘vulnerable’ to the new attack method.
