Trueface argues advantage of single-model approach as mask detection added to face biometrics
In the midst of a series of announcements about biometric facial recognition systems being upgraded to detect if people are wearing masks, Trueface has taken the step of explaining not just that its technology has added this capacity, but how.
A Medium post titled “Trueface Tutorials: How to Train a Face Mask Detector with Less Than 1k Training Images” explains how the company was able to train its lightweight model to perform mask detection in the same workflow as face recognition, by training a small classifier on an “intermediate feature map of a pre-trained model.”
This differs from two common approaches: training deep models from scratch, or fine-tuning them with a significant volume of new data. The approach Trueface took, according to the post, has the advantage of requiring only a small dataset and producing a model that performs multiple tasks with only a minor increase in computational load compared to the original.
Trueface’s ResNet-based face recognition model detects face masks with the addition of a single layer.
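The post does not publish Trueface’s model code, but the technique it describes can be sketched with an off-the-shelf backbone. In the sketch below, torchvision’s resnet18 stands in for the company’s proprietary recognition model, and a single linear layer is trained on an intermediate feature map while the pre-trained weights stay frozen; the choice of tap point (layer3) and the head design are assumptions for illustration, not Trueface’s actual architecture.

```python
# Minimal sketch (not Trueface's code): attach a small mask/no-mask classifier
# to an intermediate feature map of a frozen, pre-trained ResNet backbone.
import torch
import torch.nn as nn
from torchvision import models

class MaskHead(nn.Module):
    """A single extra layer trained on an intermediate feature map."""
    def __init__(self, in_channels: int = 256, num_classes: int = 2):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)             # collapse spatial dims
        self.fc = nn.Linear(in_channels, num_classes)   # the only trained layer

    def forward(self, feature_map: torch.Tensor) -> torch.Tensor:
        x = self.pool(feature_map).flatten(1)
        return self.fc(x)

class MultiTaskFaceModel(nn.Module):
    """One shared backbone produces a face embedding and a mask prediction."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        # Layers up to layer3 feed the mask head; the full backbone still
        # produces the recognition embedding.
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1, backbone.relu,
                                  backbone.maxpool, backbone.layer1,
                                  backbone.layer2, backbone.layer3)
        self.tail = nn.Sequential(backbone.layer4, backbone.avgpool)
        for p in self.parameters():          # freeze the pre-trained weights
            p.requires_grad = False
        self.mask_head = MaskHead(in_channels=256)  # only these weights train

    def forward(self, images: torch.Tensor):
        feats = self.stem(images)                  # intermediate feature map
        embedding = self.tail(feats).flatten(1)    # face-recognition features
        mask_logits = self.mask_head(feats)        # mask / no-mask prediction
        return embedding, mask_logits
```

Because both outputs come from a single forward pass through the shared backbone, the added task costs only the small head, which is the memory and latency point Ebrahimi makes.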
“The single model reduces the memory footprint, therefore a client can manage a single model for multiple tasks on resource-limited devices,” post author and Trueface Head of Computer Vision Mosalam Ebrahimi explained to Biometric Update in an email.
Not only does the single model approach support deployment in a wide range of environments, it also simplifies implementation.
“The auxiliary models are very small software packages, that make delivering, installing, and hot patching them quicker/easier for customers,” according to Ebrahimi.
Trueface used 400 images from the LFW dataset and, with the company’s RetinaFace-based face detector, placed a mask overlay over the nose and mouth area of each. This produced 400 otherwise identical masked images, for a total of 800. A subset of LFW images was held out to test the system’s performance, and after 50 epochs the model achieved accuracy that Ebrahimi characterizes as impressive. A dataset of rendered synthetic faces from synthesis.ai was used to further evaluate the model’s generalization.
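The post likewise does not include the augmentation code, but the data-generation step can be approximated as below. The RetinaFace-based placement is replaced here with a hypothetical detect_landmarks helper, and the mask graphic, directory names, and scaling margins are all assumptions for illustration.

```python
# Illustrative sketch only: paste a mask graphic over the nose-and-mouth
# region of each face to build masked/unmasked training pairs.
# `detect_landmarks` is a hypothetical stand-in for the RetinaFace-based
# detector Trueface used; `mask.png` is an assumed transparent mask image.
from pathlib import Path
from PIL import Image

def add_mask(face_path: Path, mask_png: Image.Image, out_dir: Path) -> None:
    face = Image.open(face_path).convert("RGB")
    nose, mouth_left, mouth_right = detect_landmarks(face)  # hypothetical helper

    # Scale the mask to span the mouth corners with some margin,
    # and anchor it just above the nose tip.
    width = int((mouth_right[0] - mouth_left[0]) * 1.6)
    height = int(width * mask_png.height / mask_png.width)
    resized = mask_png.resize((width, height))
    top_left = (nose[0] - width // 2, nose[1] - height // 4)

    face.paste(resized, top_left, resized)       # alpha-composite the overlay
    face.save(out_dir / f"masked_{face_path.name}")

# Build the 800-image set: 400 originals plus their masked counterparts.
mask_png = Image.open("mask.png").convert("RGBA")
for path in Path("lfw_subset").glob("*.jpg"):    # the 400 LFW images
    add_mask(path, mask_png, Path("masked_lfw"))
```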
“Multi-task deep learning models are desirable for real-time applications as they are much faster than two separate models at the inference time,” Ebrahimi concludes in the post. “Modifying a base model to add a new branch for a relevant task also often requires a relatively small training dataset.”
Article Topics
biometrics | biometrics research | dataset | facial recognition | mask detection | training | Trueface