Super-recognizers can’t help with deepfakes, but deepfakes can help with algorithms

EAB talk by AFC Lab founder explains

Deepfake faces are beyond even the ability of super-recognizers to identify consistently, with some sobering implications, but also a few positive ones, as explained in the latest lunch talk from the European Association for Biometrics (EAB).

“Human Deepfake Processing – Insights from Super-Recognizers, Law Enforcement and General Society” was presented by Dr. Meike Ramon, founder of the Applied Face Cognition Lab. Ramon is a professor at the Bern University of Applied Sciences, and researches the cognitive neuroscience of perception and memory.

Facial identity processing is among the most complex visual tasks, Ramon says, but human brains are uniquely equipped for it.

People’s ability to recognize faces is largely genetic, she explains, with research showing roughly 70 to 90 percent of differences between individuals explainable through genetics.

Human brains and eyes have evolved to recognize and identify faces, but discriminating between different people is a conceptually new task. That makes processing unfamiliar faces, such as for forensic purposes, relatively inconsistent and error-prone. Yet this is exactly what border security officials manually checking passports are asked to do at scale.

Super-recognizers are an exception, and were discovered in 2009 during research to develop a test for face blindness (prosopagnosia).

Ramon and her lab have looked into what makes them different, and Ramon helped develop the Berlin test for super-recognizer identification, or beSure, for the Berlin police.

Super-recognizers and deepfakes

One of Ramon’s areas of research is the relationship between face identity processing and deepfake detection performance.

Super-recognizers are identified through a series of three tests. One asks them to group faces belonging to the same person from a set containing many images of each of several people. Another asks subjects to match photos of people to an initial set of images taken when those people were years younger. A third asks subjects to learn faces viewed from multiple angles and then recognize each face amongst “distractors.”

The experiment compared super-recognizers to “organic traffic” and police officers who completed the beSure test. Each was shown a pair of videos and asked which was fake, and some were shown a single video and asked if it was real or a deepfake.

The research revealed that taking longer to detect deepfakes does not correspond to higher accuracy. On the contrary, performance actually decreased as responses took longer.

It also showed that super-recognizers fare roughly the same at detecting deepfakes as the control group. Further, people who scored highly on the beSure test did not prove more effective at identifying deepfakes than those with low scores.

Since the phenomenon of super-recognizers was unknown until it was accidentally discovered, Ramon notes, there may be an equivalent class of super deepfake-recognizers who could be found through a different test.

How people perceive and discriminate between synthetic identities

Another study co-authored by Ramon looks into whether synthetic faces are perceived the same way as real ones.

Observers found matching the identities of synthetic and natural faces roughly equally difficult, and the facial similarity of the images affects discrimination efficiency in the same way. This suggests that people carry out perceptual discrimination of faces similarly for synthetic and natural identities.

This is good news for performing research with synthetic faces that yields insights about real faces, as well as for the prospect of making training datasets for facial recognition and related AI models more diverse with synthetic data.

Ramon concluded with an invitation to attendees for participation in collaborative and adversarial research.
