Super-recognizers can’t help with deepfakes, but deepfakes can help with algorithms

EAB talk by AFC Lab founder explains
Deepfake faces are beyond even the ability of super-recognizers to identify consistently, with some sobering implications, but also a few positive ones, as explained in the latest lunch talk from the European Association for Biometrics (EAB).

“Human Deepfake Processing – Insights from Super-Recognizers, Law Enforcement and General Society” was presented by Dr. Meike Ramon, founder of the Applied Face Cognition Lab. Ramon is a professor at the Bern University of Applied Sciences, and researches the neuroscience of human cognition for perception and memory.

Facial identity processing is among the most complex visual tasks, Ramon says, but human brains are uniquely equipped for the task.

People’s ability to recognize faces is largely genetic, she explains, with research showing roughly 70 to 90 percent of differences between individuals explainable through genetics.

Human brains and eyes have evolved to recognize and identify faces, but discriminating between different people is a conceptually new task. That makes processing unfamiliar faces, such as for forensic purposes, relatively inconsistent and prone to errors. Yet this is exactly what border security officials manually checking passports are asked to do at scale.

Super-recognizers are an exception, and were discovered in 2009 during research to develop a test for face blindness (prosopagnosia).

Ramon and her lab have looked into what makes them different, and Ramon helped develop the Berlin test for super-recognizer identification, or beSure, for the Berlin police.

Super-recognizers and deepfakes

One of Ramon’s areas of research is the relationship between face identity processing and deepfake detection performance.

Super-recognizers are identified through a series of three tests. One asks them to group faces that belong to the same person from among a set of many images each of several people. Another asks subjects to match photos of people to an initial set of images from when they were years younger. A third asks subjects to learn faces viewed from multiple angles and then recognize the face amongst “distractors.”

The experiment compared super-recognizers to “organic traffic” and police officers who completed the beSure test. Each group was shown a pair of videos and asked which was fake, and some were shown a single video and asked whether it was real or a deepfake.

The research revealed that taking longer to detect deepfakes does not correspond to higher accuracy. On the contrary, performance actually decreased as responses took longer.

It also shows that super-recognizers fare roughly the same as the control group at detecting deepfakes. Further, people who scored highly on the beSure test did not prove more effective at identifying deepfakes than those with low scores.

Since the phenomenon of super-recognizers was unknown until it was accidentally discovered, Ramon notes, there may be an equivalent class of super deepfake-recognizers who could be found through a different test.

How people perceive and discriminate between synthetic identities

Another study co-authored by Ramon looks into whether synthetic faces are perceived the same way as real ones.

Observers found matching the identities of synthetic and natural faces roughly equally difficult, and the facial similarity of the images affects discrimination efficiency in the same way. This suggests that people carry out perceptual discrimination of faces similarly for synthetic and natural identities.

This is good news for performing research with synthetic faces that yields insights about real faces, as well as for the prospect of making training datasets for facial recognition and related AI models more diverse with synthetic data.

Ramon concluded with an invitation to attendees for participation in collaborative and adversarial research.
