Multi-spectral face biometrics, emotion recognition and deepfakes discussed at EAB seminar
The state of the art in several areas related to face biometrics and computer vision was explored in the latest virtual lunch talk held by the European Association for Biometrics (EAB).
Antitza Dantcheva, a postdoctoral fellow with INRIA’s Stars team, presented a selection of her research on ‘deciphering and generating faces,’ sharing insights from work on cybersecurity, healthcare and deepfakes with an EAB audience.
Facial images have caught the interest of people in a range of fields, including medicine, forensics and biometrics, because of the information they reveal. Video reveals even more, such as heart rate.
Dantcheva noted some of the commonalities between the biometrics research areas, as well as differences such as the more constrained environments found in healthcare scenarios compared to cybersecurity. Face generation, she noted, is the newest of her research areas.
Dantcheva also discussed the state of the art in using additional spectra to make facial recognition more robust, and its applications in presentation attack detection. Translating between spectra, such as infrared to visible, and then matching the results with face biometrics also works fairly well, according to Dantcheva’s research, as the relevant features are distinct in each spectrum and preserved in translation.
A recent study by Dantcheva using face biometrics for healthcare attempted to detect apathy, which can be a sign of a range of neurological disorders. What may seem like a straightforward computer vision task, Dantcheva says, turns out to be much more challenging, in part because the symptom is difficult to assess objectively, doctors told her. The best result the research yielded for detecting the clinical symptom with biometrics-based technology in this way was 84 percent accuracy, and Dantcheva seems optimistic about the emotion recognition technology’s potential for assisting diagnosis.
Face generation research by Dantcheva and her student has now progressed to video face generation using generative adversarial networks (GANs), thanks to recent advances in the field. The good news is that visually perfect deepfake videos are still some way off, but the field is advancing quickly.
The research has touched on a range of challenges, including head movement.
On the deepfake detection side, Dantcheva has looked into the use of 3D convolutional neural networks such as 3D ResNet to build models for recognizing generated images, with some success.
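The intuition behind using 3D convolutions here is that they filter across time as well as space, so a detector can pick up frame-to-frame inconsistencies that single-image models miss. As a minimal illustration (not Dantcheva’s actual model, and the kernel and shapes below are invented for the example), the following sketch applies a naive single-channel 3D convolution with a temporal-difference kernel to a toy video clip:

```python
import numpy as np

def conv3d(clip, kernel):
    """Naive 'valid' 3D convolution (cross-correlation) over a
    single-channel video clip of shape (T, H, W)."""
    kt, kh, kw = kernel.shape
    T, H, W = clip.shape
    out = np.zeros((T - kt + 1, H - kh + 1, W - kw + 1))
    for t in range(out.shape[0]):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[t, i, j] = np.sum(clip[t:t+kt, i:i+kh, j:j+kw] * kernel)
    return out

# A hand-crafted temporal-difference kernel: it responds to change
# between consecutive frames, the kind of spatio-temporal cue a
# trained 3D CNN can learn to exploit for deepfake detection.
kernel = np.zeros((2, 3, 3))
kernel[0] = -1.0 / 9  # subtract local average of frame t
kernel[1] = 1.0 / 9   # add local average of frame t+1

clip = np.zeros((4, 8, 8))
clip[2:] = 1.0  # abrupt brightness jump between frames 1 and 2

response = conv3d(clip, kernel)
print(response.shape)  # (3, 6, 6): strong response only at the jump
```

In a real 3D ResNet the kernels are learned rather than hand-crafted and stacked over many layers, but the underlying operation, sliding a small filter through the time dimension as well as the image plane, is the same.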
The next EAB virtual lunch talk on October 5 will address bias mitigation in anti-spoofing technologies.