Humans more likely to notice fakes among familiar faces, research suggests

Deepfakes and synthetic data, and how they impact biometric systems, were the focus of the Norwegian Biometrics Laboratory Annual Workshop 2024, hosted by the EAB last week.
Presentations discussed the benefits and challenges synthetic data brings to facial recognition, as well as the security threats it poses.
Sascha Frühholz of the University of Oslo presented ongoing research, conducted in partnership with several other scientists, under the title "Human Cognitive and Neural Mechanisms for Identity Recognition in Synthetic Faces."
Their work examines how people perceive identity in synthetic images compared with real faces, and what happens in the brain as they do so.
Some past work indicates that people can discriminate between synthetic and real faces, and that the brain sometimes registers a difference even when the individual cannot consciously detect one. People's ability to tell real and fake apart, however, is modest at best, and some studies find it negligible or even non-existent.
Research by Frühholz and his partners bears out the latter point: people judge high-quality synthetic faces to be real nearly as often as genuine ones.
To advance the research, they sought "synthetic faces that had a certain percentage level of similarity to the original face." Faces were generated along a continuum from 0 to 100 percent similarity, at intervals of 20 percent, and were provided by Sébastien Marcel of Idiap.
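The article does not say how Idiap produced this similarity continuum; one common approach in face synthesis research is to interpolate between latent vectors in a generative face model. The sketch below is a minimal, hypothetical illustration of that idea, where generate_face() is a placeholder rather than a real API and the interpolation weights map directly onto the 0-to-100-percent levels described above.

```python
# Hypothetical sketch: producing faces along a similarity continuum by
# linearly interpolating between two latent vectors. This is NOT the
# method the researchers describe; it is one common way to get graded
# similarity in a generative face model.
import numpy as np

rng = np.random.default_rng(seed=0)
latent_dim = 512                               # typical latent size in GAN-style face generators

z_original = rng.standard_normal(latent_dim)   # stands in for the original identity's latent
z_other    = rng.standard_normal(latent_dim)   # unrelated identity (0 percent similarity)

def generate_face(latent):
    """Placeholder for a call to a pretrained face generator (not a real API)."""
    raise NotImplementedError

# 0, 20, 40, 60, 80, 100 percent similarity to the original face
for pct in range(0, 101, 20):
    alpha = pct / 100.0
    z_mix = (1 - alpha) * z_other + alpha * z_original   # linear blend of the two latents
    # face = generate_face(z_mix)   # would render the morphed face at this similarity level
    print(f"{pct:3d}% similarity -> latent norm {np.linalg.norm(z_mix):.2f}")
```

At 100 percent the blend reduces to the original latent and at 0 percent to the unrelated one, so the intermediate steps give the graded stimuli the study needed.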
Faces were also divided between familiar (celebrities) and unfamiliar ones.
They found that people took the longest to judge faces at 40 to 60 percent similarity, and the least time with those at 100 percent or 0 percent similarity.
While people tend to correctly match faces with 100 percent similarity and to perceive the difference between faces with low similarity, Frühholz says there are insights to be gained from the middle of the curve. People were more conservative about matching familiar faces to images with 20 to 80 percent similarity, perhaps because they noticed differences more readily. For unfamiliar faces at 40 percent similarity, study participants were still slightly more likely than not to identify the images as the same person.
A preliminary examination of brain activity (in the fusiform face area, occipital face area and superior temporal sulcus) during these judgments suggests that familiar and unfamiliar faces are processed differently when being assessed as matching or not, at least for difficult comparisons.
Each of these brain areas is associated with a different function. The FFA analyzes faces holistically. The OFA is active when people consider the details of a face, as in more difficult comparisons. The STS is active when the individual tries to place a face in a social context. Frühholz's research indicates that the OFA is used more for difficult matching decisions involving unfamiliar faces, while the FFA and STS are activated to different degrees according to the similarity or dissimilarity of the faces presented.
A deepfake detection collaboration between PXL Vision and the Idiap Research Institute was announced earlier this week.