IDnow and Idiap researchers create biometric PAD dataset for better generalization
A new dataset for conducting research on facial recognition presentation attack detection (PAD) has been developed by a team of scientists from IDnow and the Idiap Research Institute. Another has also been created by Idiap researchers to help advance periocular biometric PAD on VR headsets.
The researchers say that “current PAD methods are often sensitive to the data domain, partly due to the limitations of training PAD datasets.” Those limitations stem from the difficulty of collecting sufficient volumes of consented data, and result in weak generalization, which “is currently the main challenge faced by the PAD community.”
The new SOTERIA dataset is presented in a paper on “A Novel and Responsible Dataset for Face Presentation Attack Detection on Mobile Devices.” The dataset consists of face videos, motion data, depth information and samples from a projector-replay attack that is novel, according to the researchers. The content was collected from 70 volunteers, and used to create 8,000 bona fide samples plus 24,000 presentation attack samples.
The project was supported by funding from the EU’s Horizon 2020 program.
The researchers demonstrated the value of the SOTERIA dataset by evaluating a state-of-the-art (SOTA) facial recognition model (IResNet100) for vulnerability to the attack methods represented in the dataset. A SOTA PAD model (DeepPixBis) is also analyzed in the paper.
The analysis shows that the attacks in the dataset are effective against the IResNet100 model, suggesting they will be sufficient to defeat contemporary face biometrics systems.
On the PAD side, the researchers used SOTERIA and two other datasets to train the DeepPixBis model. Training it with SOTERIA resulted in good cross-dataset performance, as measured by APCER (attack presentation classification error rate). Cross-dataset performance was worse than intra-dataset APCER when the model was trained on the other two datasets. The model trained on SOTERIA did not reach practically useful detection levels, however, with a BPCER (bona fide presentation classification error rate) above 55 percent with the APCER set to 1 percent, meaning more than half of real biometric presentations were rejected.
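The trade-off described above can be illustrated with a small sketch of the two standard PAD error rates. The scores and thresholds below are hypothetical toy values, not taken from the paper; they simply show how fixing APCER at 1 percent can push BPCER past 50 percent when bona fide and attack score distributions overlap.

```python
# Hypothetical sketch of the two PAD error rates (ISO/IEC 30107-3 style).
# Convention assumed here: higher score = more likely bona fide.

def apcer(attack_scores, threshold):
    # APCER: fraction of attack presentations wrongly accepted as bona fide.
    return sum(s >= threshold for s in attack_scores) / len(attack_scores)

def bpcer(bonafide_scores, threshold):
    # BPCER: fraction of bona fide presentations wrongly rejected as attacks.
    return sum(s < threshold for s in bonafide_scores) / len(bonafide_scores)

def bpcer_at_apcer(bonafide_scores, attack_scores, target_apcer=0.01):
    # Choose the lowest candidate threshold whose APCER stays at or below
    # the target, then report the BPCER at that operating point.
    for t in sorted(set(attack_scores + bonafide_scores)):
        if apcer(attack_scores, t) <= target_apcer:
            return bpcer(bonafide_scores, t)
    return 1.0  # no threshold meets the target: reject everything

# Toy score distributions (illustrative only): one strong attack forces a
# high threshold, which in turn rejects most low-scoring bona fide samples.
attacks = [0.1] * 99 + [0.9]
bonafide = [0.05] * 60 + [0.95] * 40
print(bpcer_at_apcer(bonafide, attacks, target_apcer=0.01))  # 0.6
```

With these toy numbers, hitting a 1 percent APCER requires a threshold of 0.9, at which 60 percent of bona fide presentations are rejected, mirroring the above-55-percent BPCER the researchers report.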
Further work using the new dataset is expected to evaluate the effect of different recording devices and environments and the impact of motion challenges on PAD, as well as gender-based performance disparities.
Authentication in VR
Idiap researchers have also published a paper on biometric authentication and PAD, which introduces a video dataset of periocular biometrics collected with a Meta Quest Pro VR headset.
The VRBiom dataset consists of ten-second videos: 900 genuine samples and 1,104 presentation attacks.
“Assessing the Reliability of Biometric Authentication on Virtual Reality Devices” proposes baseline performance metrics, but also shows that periocular biometrics can be spoofed on VR headsets.
The researchers used ResNet34 and MobileFaceNet convolutional neural network (CNN) architectures with the new dataset, and found that the latter was more effective for PAD. However, similarly to the finding in the PAD paper above, setting a low APCER resulted in roughly half of bona fide presentations being rejected.
Both papers will be presented at the IEEE International Joint Conference on Biometrics (IJCB 2024), which will be held this September in Buffalo, New York.