IDnow reduces bias in facial recognition with EU-funded MAMMOth project


Identity verification provider IDnow reports significant progress in reducing algorithmic bias in facial recognition systems, following its participation in the EU-funded MAMMOth (Multi-Attribute, Multimodal Bias Mitigation in AI Systems) project.

The company notes that the widely cited 2018 “Gender Shades” study from the MIT Media Lab prompted the search for more inclusive data and better model calibration. Biometric testing by the National Institute of Standards and Technology (NIST) has found that the majority of facial recognition algorithms are more likely to misidentify people with darker skin, women and the elderly, though the most accurate algorithms show very low differentials in the Institute’s latest testing.

As part of MAMMOth, IDnow focused on identifying and mitigating bias in its own facial recognition algorithms. One key challenge was the variation in skin tone representation caused by ID photo color adjustments, which can distort comparisons between selfies and official documents. To address this, IDnow applied a style transfer technique to diversify its training data, improving model resilience and reducing bias toward darker skin tones.
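The article does not disclose which style transfer method IDnow used. As a rough illustration only, the sketch below uses simple per-channel color-statistics matching, a lightweight technique in the spirit of classic color transfer, to generate color-shifted variants of a face crop. The function name and all details here are assumptions, not IDnow's actual pipeline.

```python
import numpy as np

def color_stat_transfer(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Match the per-channel mean and std of `source` to `reference`.

    A minimal stand-in for style transfer: it simulates the kind of color
    adjustment an ID-photo pipeline might apply, so the same face can be
    added to training data under several different color profiles.
    Inputs are H x W x 3 uint8 images; output is uint8 in [0, 255].
    """
    src = source.astype(np.float64)
    ref = reference.astype(np.float64)
    out = np.empty_like(src)
    for c in range(src.shape[-1]):
        s_mean, s_std = src[..., c].mean(), src[..., c].std()
        r_mean, r_std = ref[..., c].mean(), ref[..., c].std()
        scale = r_std / s_std if s_std > 0 else 1.0
        # Shift and rescale this channel so its statistics match the reference.
        out[..., c] = (src[..., c] - s_mean) * scale + r_mean
    return np.clip(out, 0, 255).astype(np.uint8)
```

Applied in both directions between selfie crops and document photos, a transform like this would yield color-shifted training variants without collecting new images, which is the general idea behind augmentation-based bias mitigation described above.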

Other tools developed by IDnow address bias at different levels of the verification pipeline, including within the biometric matching algorithms themselves.

The results were notable: verification accuracy increased by 8 percent, even while using only 25 percent of the original training data volume. The accuracy gap between lighter and darker skin tones was cut by more than 50 percent. The enhanced AI model was integrated into IDnow’s identity verification platform in March 2025 and has been in active use since.

“Research projects like MAMMOth are crucial for closing the gap between scientific innovation and practical application,” says Montaser Awal, director of AI and ML at IDnow. “By collaborating with leading experts, we were able to further develop our technology in a targeted manner and make it more equitable.”

IDnow plans to adopt the MAI-BIAS open-source toolkit developed during the project to evaluate fairness in future AI models. This will allow the company to document biometric bias mitigation efforts and ensure consistent standards across markets.

“Addressing bias not only strengthens fairness and trust, but also makes our systems more robust and adoptable,” Awal added. “This will raise trust in our models and show that they work equally reliably for different user groups across different markets.”

The nuances of AI bias and discrimination

The MAMMOth project, supported by Horizon Europe, brought together leading academic and industry partners to address fairness in artificial intelligence across multiple modalities. The European research project ran for 36 months, concluding this month, and set out to tackle gender, racial and other biases in AI, supporting the broader prohibition of discrimination enshrined in EU law.

As AI becomes more ubiquitous in domains such as health, education, justice, personal security and work, MAMMOth sought to identify characteristics that are not protected under the law but which have been shown to lead to bias in AI systems. Such biases have been studied and documented in leading academic journals, which MAMMOth references.

In a criminal justice context, for example, such characteristics could include school grades and other personal details collected about an offender, such as their living situation. In healthcare, attributes shown to be potentially connected to bias in AI systems include disability, age, native language and dialect, sexuality and socioeconomic status, among others.

Since AI trains on available data, biases could become further entrenched. For example, it is well known that psychology studies disproportionately draw their participants from Western, Educated, Industrialized, Rich and Democratic (WEIRD) societies. This reliance may mean the findings do not accurately reflect the responses of individuals from diverse cultural backgrounds.

More on these characteristics can be found on the MAMMOth website.
