
Idemia paper pitches framework for broader consideration of biometrics excellence

Urges bias mitigation to increase public confidence

Artificial intelligence algorithms should be more broadly evaluated in order to ensure systems deliver results that will inspire confidence in biometrics and other advanced technologies, according to a new position paper from a market leader. Training data and systems configuration could use some work too.

‘Six Key Factors in Algorithm Excellence,’ written by Idemia VP of Innovation and Client Engagement Teresa Wu, sets out criteria for selecting AI technologies for digital identity and biometric applications.

Problems with algorithms go beyond simplistic considerations of accuracy, Wu writes, with algorithmic bias a ready example of a negative characteristic too often present in AI systems.

She proceeds to make six recommendations for algorithm developers to pursue for “optimum performance,” each one targeted at a specific risk listed in NIST’s AI Risk Management Framework. The recommendations are intended to make up a “qualification framework that extends beyond technical considerations to assess whether an identity and biometrics technology company is pursuing excellence in development of its AI-based algorithm.”

The need for transparency, performance over time and across multiple tests, “experience and robustness in the field,” security and privacy, and fairness and ethics commitments are each explained. Wu notes that biometric systems used for identification and verification should be tested for both functions, and lays the blame for the recent wave of state and municipal legislation restricting facial recognition use on algorithms that exhibit bias and can cause exclusion from services.

Wu also highlights the importance of sourcing training data ethically and configuring technology properly to avoid introducing bias.

A real-world example and a warning from academia

In an example of results that reflect bias in an AI system, whether introduced through the algorithm or the implementation, a woman living in Manitoba, Canada, had to have her photo color-corrected by driver’s license authority staff after the system flagged the image as depicting an “unnatural” skin tone, CBC reports.

Manitoba Public Insurance suggested the error could have been caused by focus and lighting problems preventing the photo from meeting its “facial recognition software and photo standards.”

Tolu Ilelaboye, an African-Canadian from Winnipeg, is unhappy with the response from the public agency, which says that when image capture problems occur, employees adjust camera settings and other factors. Ilelaboye says that was not done in her case, and suggests that better employee training could have resolved the issue. MPI also says skin tone is not a reason for a license photo to be rejected.

It could be worse. A paper presented at the 2022 ACM Conference on Fairness, Accountability, and Transparency showed that robots trained on “foundational models” used in computer vision, natural language processing or both adopted bigoted stereotypes about people based on their skin color or gender.

The authors of ‘Robots Enact Malignant Stereotypes’ find that institutional policies must be put in place to reduce the harms caused by large datasets and ‘Dissolution Models,’ and issue a call for collaborative action to address bias and other harmful behaviors being built into some computing systems.
