
Presentation attack detection might not suffer from gender bias

Gender bias, not uncommon in facial recognition, is not necessarily a factor in detecting face biometric presentation attacks, according to new research.

A team of scientists at a pair of universities in North Carolina looked into the possibility of bias after noticing it had received little attention despite continuing global concern about the credibility of facial recognition systems in general.

The work, done at North Carolina A&T State and Winston-Salem State universities, “exposed minor gender bias” in presentation attack detection methods based on convolutional neural networks, or CNNs.

The authors of the paper acknowledged that other forms of bias can be present, but left proving that to future work.

Underrepresentation of female faces in training data was not necessarily the source of gender bias in the model, according to the team’s paper. Also, the debiasing variational autoencoder, or DB-VAE, method used in experiments mitigated what bias there was in detecting spoof faces.
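The paper does not reproduce its implementation, but the core idea behind DB-VAE-style debiasing can be illustrated briefly: latent codes learned by the autoencoder are used to estimate how common each training face is, and rarer faces are then sampled more often. The sketch below is a simplified, hypothetical illustration of that resampling step using per-dimension histograms; the function name and parameters are assumptions for illustration, not the researchers' code.

```python
# Hypothetical sketch of the resampling idea behind a debiasing
# variational autoencoder (DB-VAE): estimate how densely each training
# face falls in the learned latent space and oversample rare regions.
# The real method trains a full VAE jointly with the classifier.
import numpy as np

def debias_sampling_probs(latents: np.ndarray, bins: int = 10, alpha: float = 0.01) -> np.ndarray:
    """Return per-sample selection probabilities that upweight
    under-represented regions of the latent space."""
    n, d = latents.shape
    density = np.ones(n)
    # Approximate the joint latent density with per-dimension histograms.
    for j in range(d):
        hist, edges = np.histogram(latents[:, j], bins=bins, density=True)
        idx = np.clip(np.digitize(latents[:, j], edges[:-1]) - 1, 0, bins - 1)
        density *= hist[idx] + 1e-8
    # Rare samples (low density) get a higher selection probability.
    probs = 1.0 / (density + alpha)
    return probs / probs.sum()

# Example: draw a resampled training epoch with the debiased probabilities.
latents = np.random.randn(1000, 32)            # stand-in latent codes
probs = debias_sampling_probs(latents)
resampled_idx = np.random.choice(1000, size=1000, replace=True, p=probs)
```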

The door was left open to expand future experiments beyond the two CNN models employed in this work, ResNet50 with transfer learning and VGG16. The researchers expressed interest in including more CNNs, possibly LeNet-5, AlexNet and Inception-v3.
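For readers unfamiliar with the setup, the following is a minimal, hypothetical sketch of how a pretrained ResNet50 might be adapted via transfer learning into a two-class bona fide versus spoof detector. It is illustrative only, with assumed hyperparameters and dummy data, and is not the researchers' actual pipeline.

```python
# Hypothetical sketch: fine-tuning an ImageNet-pretrained ResNet50 as a
# binary bona fide vs. spoof classifier for presentation attack detection.
import torch
import torch.nn as nn
from torchvision import models

# Load a pretrained ResNet50 backbone for transfer learning.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Freeze the backbone so only the new classification head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a two-class head: bona fide vs. attack.
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of face crops.
images = torch.randn(8, 3, 224, 224)   # stand-in for preprocessed face images
labels = torch.randint(0, 2, (8,))     # 0 = bona fide, 1 = presentation attack
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```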

Amazon, Microsoft and Google, three of the biggest companies developing face biometrics, cannot seem to get off the mat on the issue of bias, making further research important.
