
Researchers describe profound complexity of biometrics ethics puzzle


No one knowledgeable expected it would be easy to take the bias out of AI, but it is common even today for insiders to wave their hands dismissively about the task when talking about the future of biometrics and other algorithms.

Progress on bias, ethics and other complex problems in AI is assumed in the same way that growth in chip speeds and feeds is assumed. Apparently, that view is naïve in an age when people are growing more distrustful of AI generally and of biometrics specifically.

In fact, a recent University of Texas at Dallas study finds that the problem of bias may be as intractable and as deeply rooted in AI as it is in human nature.

On the plus side, just outlining the task of imbuing AI with ethics is worthwhile. And, accordingly, more university programs are starting to tailor their courses to groom future industry leaders who take for granted the need to ameliorate algorithm biases.

The study’s lead author, from UT Dallas’ School of Behavioral and Brain Sciences, explained in an article this month that the scale of the goal is profound.

The “Whiteness” of an image data set is a well-known trap, said psychological sciences doctoral student Jacqueline Cavazos, but the research team found that lower-quality image pairs result in more pronounced racial bias.

Senior author Alice O’Toole, from the School of Behavioral and Brain Sciences, said that different measures of performance from the same algorithm can indicate bias or equity.
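O’Toole’s point — that the same algorithm can look biased or equitable depending on which measure you report — can be illustrated with a minimal sketch. The scores, groups and threshold below are invented for illustration and are not from the UT Dallas study; the two metrics, false match rate and false non-match rate, are standard ways of scoring face verification systems.

```python
# Illustration only: per-group metrics on the same algorithm can disagree.
# All scores below are made up; they do not come from the UT Dallas study.

def false_match_rate(impostor_scores, threshold):
    """Fraction of impostor (non-matching) pairs scored at or above threshold."""
    return sum(s >= threshold for s in impostor_scores) / len(impostor_scores)

def false_non_match_rate(genuine_scores, threshold):
    """Fraction of genuine (matching) pairs scored below threshold."""
    return sum(s < threshold for s in genuine_scores) / len(genuine_scores)

# Hypothetical similarity scores for two demographic groups, A and B.
impostor = {"A": [0.1, 0.2, 0.3, 0.6], "B": [0.1, 0.2, 0.6, 0.7]}
genuine = {"A": [0.5, 0.8, 0.9, 0.9], "B": [0.4, 0.8, 0.9, 0.9]}

threshold = 0.55
for group in ("A", "B"):
    fmr = false_match_rate(impostor[group], threshold)
    fnmr = false_non_match_rate(genuine[group], threshold)
    print(f"group {group}: FMR={fmr:.2f}, FNMR={fnmr:.2f}")
```

In this toy data the false non-match rate is identical for both groups (suggesting equity), while the false match rate is twice as high for group B (suggesting bias) — a single headline accuracy number would hide the difference entirely.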

The UT work should be welcome data for students entering the University of Cambridge’s inaugural master’s degree in managing the risks of AI, the first such program in the UK. The school is calling it an AI ethics program.

Does this effort to dig deep into the bias issue have legs? One vendor’s chief executive seems to think so.

Robert Prigge, chief executive of Jumio, said bias has become a brand issue for companies investing in the technology.

It is a potential legal landmine, of course, he said. But customer sentiment can also be affected, and that is something companies cannot escape with an undisclosed settlement and a non-disclosure agreement.

If nothing else, AI black boxes need to be opened up to the light more often. What executives do not know about their AI could devastate their firms.
