What information is stored in face biometric templates? EAB explores
Deeply-learned face representations underpin the success of today's facial recognition systems.
While these representations are designed to encode the identity of an individual, recent works have shown that much more information is stored within them.
A webinar organized by the European Association for Biometrics (EAB) and led by Philipp Terhörst, Research Scientist at Fraunhofer IGD, examined these issues on Tuesday.
The talk showed how many soft-biometric attributes are embedded in face biometric templates and that these attributes often have a strong correlation with face verification performance.
The event was divided into three parts, each answering a different question.
What information is stored in face templates?
According to Terhörst, the main information stored in face biometric templates includes demographics, image characteristics, and social traits.
The research conducted by the Fraunhofer IGD scientist analyzed two types of face templates with respect to 113 attributes, and concluded that 74 of them could easily be predicted, particularly non-permanent ones.
Information stored in FaceNet (2015) templates. Credit: Terhörst et al.
Terhörst achieved this by training a massive attribute classifier (MAC) to jointly predict multiple attributes, such as face shape, beard type, and whether the individual is wearing lipstick or not.
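As a rough illustration of this setup, the sketch below shows how a multi-head classifier over fixed face embeddings could jointly predict many binary attributes. The architecture, dimensions, and attribute count are illustrative assumptions, not details taken from the talk.

```python
# Hypothetical sketch of a multi-attribute classifier over fixed face templates.
# Names and hyperparameters are illustrative, not from the webinar.
import torch
import torch.nn as nn

class MultiAttributeClassifier(nn.Module):
    def __init__(self, embedding_dim=512, num_attributes=113, classes_per_attribute=2):
        super().__init__()
        # Shared trunk over the (frozen) face template
        self.trunk = nn.Sequential(
            nn.Linear(embedding_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 128),
            nn.ReLU(),
        )
        # One small prediction head per attribute (e.g. beard type, lipstick, face shape)
        self.heads = nn.ModuleList(
            [nn.Linear(128, classes_per_attribute) for _ in range(num_attributes)]
        )

    def forward(self, embeddings):
        shared = self.trunk(embeddings)
        return [head(shared) for head in self.heads]  # one logit tensor per attribute

# Joint training step on a batch of fixed embeddings (e.g. FaceNet or ArcFace templates)
model = MultiAttributeClassifier()
criterion = nn.CrossEntropyLoss()
embeddings = torch.randn(32, 512)          # batch of face templates
labels = torch.randint(0, 2, (32, 113))    # one binary label per attribute
logits_per_attribute = model(embeddings)
loss = sum(criterion(logits, labels[:, i]) for i, logits in enumerate(logits_per_attribute))
loss.backward()
```

A shared trunk with one lightweight head per attribute lets all attributes be learned jointly from the same template, which is the key idea behind a "massive" attribute classifier.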
The datasets used during the experiments were LFW and CelebA. Terhörst's team analyzed 13 thousand images from over five thousand individuals with up to 73 attribute annotations from LFW, while the CelebA testing took into consideration 200 thousand images from over ten thousand celebrities and 40 binary attributes.
Information stored in ArcFace (2019) templates. Credit: Terhörst et al.
The tests were carried out on FaceNet and ArcFace embeddings, and showed that head pose and social traits were the easiest to predict.
Face geometry, nose, and image quality, among others, were also predictable, while skin, mouth and environment were the hardest traits to predict.
How does it relate to fairness in facial recognition?
To assess potential biases, Terhörst then proceeded to analyze the influence of soft-biometric attributes on the performance of facial recognition algorithms, particularly ArcFace and FaceNet.
To do so, the scientist's team used a database named MAAD-Face, which holds a large number of face images with numerous high-quality attribute annotations.
These included six positive/negative control groups for each attribute, which were created by randomly selecting samples from the database. These synthetic groups had the same number of samples as their positive and negative counterparts.
“For example,” Terhörst explained, “if we had ten thousand sample images with eyeglasses, and ninety thousand without, for the positive control group we would just look for randomly selected ten thousand samples, and for the negative control group randomly selected ninety thousand.”
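In code, that control-group construction amounts to random sampling with matched group sizes. The sketch below is a hypothetical illustration of the procedure Terhörst described, not code from the study; the data layout and function name are assumptions.

```python
# Hypothetical sketch: build positive/negative control groups for one attribute
# by drawing random samples of the same sizes as the real groups.
import random

def build_control_groups(samples, attribute, seed=0):
    """samples: list of dicts with boolean attribute annotations, e.g. {"eyeglasses": True}."""
    rng = random.Random(seed)
    positives = [s for s in samples if s[attribute]]
    negatives = [s for s in samples if not s[attribute]]
    # Control groups: same sizes as the real groups, but drawn at random
    # from the whole database regardless of the attribute value.
    positive_control = rng.sample(samples, len(positives))
    negative_control = rng.sample(samples, len(negatives))
    return positives, negatives, positive_control, negative_control

# Proportions from the example in the quote: 10,000 with eyeglasses, 90,000 without
database = [{"eyeglasses": i < 10_000} for i in range(100_000)]
pos, neg, pos_ctrl, neg_ctrl = build_control_groups(database, "eyeglasses")
assert len(pos_ctrl) == len(pos) and len(neg_ctrl) == len(neg)
```

Comparing verification performance on the real groups against these randomly drawn controls indicates whether a performance gap is actually tied to the attribute rather than simply to group size.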
The results of analyzing both facial recognition algorithms through the control groups showed that, in terms of demographics, middle-aged, senior, white, and male individuals achieved higher recognition accuracy rates than young, Asian, Black, and female individuals.
Where visibility-related attributes were concerned, individuals with a fully visible forehead, a receding hairline, or a bald head, and those not wearing eyeglasses, scored higher accuracy rates than those with an obstructed forehead, bangs, eyeglasses, or wavy hair.
Temporary attributes like hats, earrings, lipstick and eyeglasses also reduced the precision of the facial recognition algorithms, while arched eyebrows, big or pointy nose, bushy eyebrows, double chin and high cheekbones were responsible for higher accuracy rates.
Both algorithms also scored better with smiling faces and closed mouths than with other non-neutral expressions. According to Terhörst, this might be due to the fact that a large number of images in the database were of smiling celebrities.
Additional biases related to users' hair and eye color, and whether or not they had a beard.
How can biases in facial recognition be mitigated?
Knowing what information is encoded in face templates might help to develop bias-mitigating solutions, Terhörst explained, proceeding to the third part of the webinar.
According to the scientist, however, previous works in this field required labels of the bias-related attributes beforehand, and could only mitigate specific biases.
These approaches have also reportedly degraded the overall performance of facial recognition algorithms and proved difficult to integrate into existing systems.
A possible alternative to traditional approaches would be fair score normalization (FSN). The technique, Terhörst explained, can operate on unlabelled data and effectively mitigate biases of unknown origin.
FSN can reportedly also improve the performance of facial recognition systems considerably, and can be integrated easily into existing systems.
How FSN works. Credit: Terhörst et al.
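As a very rough illustration of the idea (not the published algorithm), the sketch below clusters unlabeled embeddings, estimates a per-cluster decision threshold at a fixed false match rate, and shifts each comparison score by the gap between the global threshold and the thresholds of the two clusters involved. The class name, the clustering choice, and the threshold rule are all assumptions for illustration; the exact formulation used by Terhörst's team may differ.

```python
# Hypothetical sketch of cluster-based fair score normalization on unlabeled data.
import numpy as np
from sklearn.cluster import KMeans

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

class FairScoreNormalizer:
    def __init__(self, n_clusters=16, seed=0):
        self.kmeans = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10)

    def fit(self, embeddings, impostor_pairs, target_fmr=1e-3):
        """embeddings: (N, d) array; impostor_pairs: list of (i, j) index tuples."""
        labels = self.kmeans.fit_predict(embeddings)
        scores = np.array([cosine(embeddings[i], embeddings[j]) for i, j in impostor_pairs])
        # Global decision threshold at the target false match rate
        self.global_thr = np.quantile(scores, 1 - target_fmr)
        # Per-cluster thresholds from impostor comparisons touching that cluster
        self.cluster_thr = np.full(self.kmeans.n_clusters, self.global_thr)
        for c in range(self.kmeans.n_clusters):
            s_c = scores[[labels[i] == c or labels[j] == c for i, j in impostor_pairs]]
            if len(s_c):
                self.cluster_thr[c] = np.quantile(s_c, 1 - target_fmr)
        return self

    def normalize(self, score, emb_a, emb_b):
        ca, cb = self.kmeans.predict(np.stack([emb_a, emb_b]))
        # Shift the raw score so every cluster operates at a comparable threshold
        return score + self.global_thr - 0.5 * (self.cluster_thr[ca] + self.cluster_thr[cb])
```

Because the clusters are found without any demographic labels, this kind of normalization can address performance gaps whose cause is unknown, which matches the properties Terhörst described.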
Terhörst then concluded the event by taking questions from the audience.
The webinar was part of the EAB virtual events series on Demographic Fairness in Biometric Systems. Registration is still available for the events on March 15 and 30 to EAB members and non-members, with a discount for members.
Article Topics
algorithms | ArcFace | biometric data | biometric identification | biometric template | biometrics | biometrics research | dataset | demographic fairness | EAB | European Association for Biometrics | FaceNet | facial recognition | training