Facial recognition algorithms need to be protected now against master face attacks – researchers

Biometrics researchers say master face attacks pose “a severe security threat” to under-protected facial recognition algorithms.

Four IEEE scientists studied how to create so-called master faces for presentation attacks, using a latent variable evolution algorithm to learn, among other things, how to generate the strongest master faces. These generated, or morphed, images match aspects of multiple enrolled templates in facial recognition systems and can be used in presentation attacks.
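For illustration, the sketch below shows the general shape of a latent-variable-evolution search: an evolutionary loop over a generator's latent space that maximizes how many enrolled templates a single candidate face would falsely match. The `generate_face` and `embed_face` functions, the threshold, and the random "enrolled" templates are hypothetical stand-ins, not the paper's actual models or data; real attacks would plug in a pretrained GAN generator and a face-recognition embedder.

```python
# Minimal sketch of a latent-variable-evolution style master-face search.
# generate_face() and embed_face() are hypothetical stubs standing in for a
# pretrained GAN generator and a face-recognition network, so the script runs
# on its own; the enrolled templates and threshold are likewise made up.
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM, EMB_DIM, N_USERS = 512, 128, 1000
THRESHOLD = 0.2  # hypothetical verification threshold on cosine similarity
PROJ = rng.standard_normal((LATENT_DIM, EMB_DIM))

def generate_face(z):
    """Stand-in for a GAN generator G(z) -> face image."""
    return np.tanh(z)

def embed_face(image):
    """Stand-in for a face-recognition embedding network."""
    v = image @ PROJ
    return v / np.linalg.norm(v)

# Enrolled templates of many different users (random stand-ins here).
enrolled = rng.standard_normal((N_USERS, EMB_DIM))
enrolled /= np.linalg.norm(enrolled, axis=1, keepdims=True)

def coverage(z):
    """Fraction of enrolled users the candidate face would falsely match."""
    emb = embed_face(generate_face(z))
    return float(np.mean(enrolled @ emb > THRESHOLD))

# Simple (1+lambda) evolution strategy over the latent vector:
# mutate the current best, keep the fittest candidate, repeat.
z_best = rng.standard_normal(LATENT_DIM)
best = coverage(z_best)
for generation in range(200):
    candidates = z_best + 0.1 * rng.standard_normal((16, LATENT_DIM))
    scores = [coverage(z) for z in candidates]
    i = int(np.argmax(scores))
    if scores[i] > best:
        z_best, best = candidates[i], scores[i]

print(f"share of enrolled templates matched by the evolved face: {best:.3f}")
```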

In their paper, the scientists urge developers and others to be aware of master faces and the threat they pose to facial recognition systems. One of the biggest targets would be phones protected with facial recognition locks.

They write that attacks could be mitigated by using algorithms with “a well-designed objective function trained on a large balanced database with a fake image detector.”

Objective functions used to train facial recognition algorithms need improving, according to the paper. Simply expanding the training database made algorithms more robust, although not invulnerable.
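As one example of the kind of “well-designed objective function” the researchers point to, margin-based losses such as an additive angular margin (ArcFace-style) are widely used to spread identities further apart in embedding space. The paper does not prescribe this particular loss; the sketch below is only an illustration of the idea, with random inputs standing in for real embeddings.

```python
# Sketch of an ArcFace-style additive angular margin objective, one common
# example of a margin-based loss for face recognition training. The inputs
# here are random placeholders, not real face embeddings.
import numpy as np

def arcface_logits(embeddings, weights, labels, scale=64.0, margin=0.5):
    """Cosine logits with an additive angular margin applied to the true class."""
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    cos = emb @ w                                # cosine similarity to each identity centre
    theta = np.arccos(np.clip(cos, -1.0, 1.0))
    rows = np.arange(len(labels))
    logits = cos.copy()
    logits[rows, labels] = np.cos(theta[rows, labels] + margin)
    return scale * logits                        # fed into softmax cross-entropy

# Example: 4 embeddings of dimension 128, 10 identity classes.
rng = np.random.default_rng(0)
logits = arcface_logits(rng.standard_normal((4, 128)),
                        rng.standard_normal((128, 10)),
                        labels=np.array([1, 3, 3, 7]))
print(logits.shape)  # (4, 10)
```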
