Idemia paper pitches framework for broader consideration of biometrics excellence

Urges bias mitigation to increase public confidence
Artificial intelligence algorithms should be more broadly evaluated in order to ensure systems deliver results that will inspire confidence in biometrics and other advanced technologies, according to a new position paper from a market leader. Training data and systems configuration could use some work too.

‘Six Key Factors in Algorithm Excellence,’ written by Idemia VP of Innovation and Client Engagement Teresa Wu, sets out criteria for selecting AI technologies for digital identity and biometric applications.

Problems with algorithms go beyond simplistic considerations of accuracy, Wu writes, with algorithmic bias serving as a ready example of a negative characteristic too often present in AI systems.

She makes six recommendations for algorithm developers to pursue for “optimum performance,” each one targeted at a specific risk listed in NIST’s AI Risk Management Framework. The recommendations are intended to make up a “qualification framework that extends beyond technical considerations to assess whether an identity and biometrics technology company is pursuing excellence in development of its AI-based algorithm.”

The paper explains each of the following criteria: transparency, performance over time and across multiple tests, “experience and robustness in the field,” security and privacy, and fairness and ethics commitments. Wu notes that biometric systems used for identification and verification should be tested for both functions, and lays the blame for the recent wave of state and municipal legislation restricting facial recognition use on algorithms that exhibit bias and can cause exclusion from services.

Wu also highlights the importance of sourcing training data ethically and configuring technology properly in order to avoid introducing bias.

A real-world example and a warning from academia

In an example of results that reflect bias in an AI system, whether introduced through the algorithm or the implementation, a woman living in Manitoba, Canada, had to have her photo color-corrected by driver’s license authority staff after the system flagged the image as depicting an “unnatural” skin tone, CBC reports.

Manitoba Public Insurance suggested the error could have been caused by focus and lighting problems that prevented the photo from meeting its “facial recognition software and photo standards.”

Tolu Ilelaboye, an African-Canadian from Winnipeg, is unhappy with the response from the public agency, which says that when image capture problems occur, employees adjust camera settings and other factors. Ilelaboye says that was not done in her case, and suggested that better employee training could have resolved the issue. MPI also says skin tone is not a reason for a license photo to be rejected.

It could be worse. A paper presented at the 2022 ACM Conference on Fairness, Accountability, and Transparency showed that robots trained on “foundational models” used in computer vision, natural language processing or both adopted bigoted stereotypes about people based on their skin color or gender.

The authors of ‘Robots Enact Malignant Stereotypes’ find that institutional policies must be put in place to reduce the harms caused by large datasets and ‘Dissolution Models,’ and issue a call for collaborative action to address bias and other harmful behaviors being built into some computing systems.
