Dividing face images makes for better biometric presentation attack detection
Seeing the big picture is useful in completing some tasks, but it introduces extraneous details that can confuse matters on the ground. Apparently, the same is true in presentation attack detection for biometric verification.
Industry-funded research in Turkey indicates that it can be more effective to train deep learning-based presentation attack detection models using only small square patches from live and spoof facial images.
That means tightly cropping facial images, real or fake, to remove as much non-face data as possible, and then breaking each image into 32 x 32-pixel patches. The squares are stitched into larger image sets that include patches from genuine faces as well as from manufactured ones.
In experiments, patches were assembled either randomly or in a fixed pattern, and random arrangements worked better.
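The pipeline described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the researchers' code: the patch size matches the 32 x 32 squares mentioned in the article, but the grid size and random-assembly details are assumptions.

```python
import numpy as np

PATCH = 32  # patch side length described in the article


def extract_patches(face: np.ndarray, patch: int = PATCH) -> np.ndarray:
    """Split a tightly cropped H x W x C face image into non-overlapping patches."""
    h, w, c = face.shape
    h, w = h - h % patch, w - w % patch  # drop edge pixels that don't fill a full patch
    grid = face[:h, :w].reshape(h // patch, patch, w // patch, patch, c)
    return grid.transpose(0, 2, 1, 3, 4).reshape(-1, patch, patch, c)


def assemble_random(patches: np.ndarray, grid: int, seed=None) -> np.ndarray:
    """Stitch a random selection of patches into one grid x grid composite image."""
    rng = np.random.default_rng(seed)
    chosen = patches[rng.permutation(len(patches))[: grid * grid]]
    rows = [np.concatenate(chosen[r * grid:(r + 1) * grid], axis=1) for r in range(grid)]
    return np.concatenate(rows, axis=0)


# Stand-in for a tightly cropped face image (random pixels, illustration only)
face = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
patches = extract_patches(face)                     # 7 x 7 = 49 patches of 32 x 32
composite = assemble_random(patches, grid=4, seed=0)
print(patches.shape, composite.shape)               # (49, 32, 32, 3) (128, 128, 3)
```

In training, composites would be built from both live and spoof patches and fed to the detection model in place of whole face images.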
Two of the three researchers working on the project were from Istanbul Technical University's Computer Engineering Department. The third works for Sodec Technologies, an Istanbul-based KYC software firm.
The team used four data sets in its research, one of which was Real-World, developed by Sodec, which provided a research grant and helped collect images. The Technological Research Council of Turkey also supported the work.
Convolutional neural networks are commonly used to train models for presentation attack detection, the researchers write, but in this area, at least, they really only perform well in intra-data-set evaluations.
They take subtle cues from collective background information in a data set, creating a dynamic like that of a horse that appears able to count but is really just reading subtle signals from its trainer.
Swap in a new trainer, or in this case new live data, and the results are far less impressive.
Cropping face images as tightly as possible to minimize extraneous data, and then breaking them up, forces the model to focus entirely on the most important bits. It also means researchers can use data sets containing fewer subjects overall.
Some patches, such as those from foreheads, contain too little information to be useful for training and were removed from the collections.
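A simple way to drop such near-uniform patches is to threshold on pixel variation. The standard-deviation criterion and threshold below are illustrative assumptions; the article only says that low-information regions such as foreheads were struck from the collections.

```python
import numpy as np


def filter_low_information(patches: np.ndarray, min_std: float = 10.0) -> np.ndarray:
    """Keep only patches whose pixel standard deviation meets a threshold.

    Near-uniform skin regions (e.g. foreheads) have very low pixel variation,
    so a low standard deviation serves as a cheap low-information proxy.
    """
    stds = patches.reshape(len(patches), -1).std(axis=1)
    return patches[stds >= min_std]


flat = np.full((5, 32, 32, 3), 128, dtype=np.uint8)              # uniform, forehead-like patches
textured = np.random.randint(0, 256, (5, 32, 32, 3), np.uint8)   # detail-rich patches
kept = filter_low_information(np.concatenate([flat, textured]))
print(len(kept))  # 5: only the textured patches survive
```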