
Researchers describe profound complexity of biometrics ethics puzzle

No one knowledgeable expected it to be easy to take the bias out of AI, yet even today insiders commonly wave the task away when talking about the future of biometrics and other algorithms.

Progress on bias, ethics and other hard problems in AI tends to be taken for granted the way growth in chip speeds and feeds is taken for granted. That view looks naïve at a time when people are growing more distrustful of AI in general and biometrics in particular.

In fact, a recent University of Texas at Dallas study finds that the problem of bias may be as intractable and as deeply rooted in AI as it is in human nature.

On the plus side, just outlining the task of imbuing AI with ethics is worthwhile. Accordingly, more university programs are tailoring their courses to train future industry leaders who take for granted the need to mitigate algorithmic bias.

The study’s lead author, from UT Dallas’ School of Behavioral and Brain Sciences, explained in an article this month that the scale of the goal is profound.

The “Whiteness” of an image data set is a well-known trap, said psychological sciences doctoral student Jacqueline Cavazos, but the research team also found that lower-quality image pairs produce more pronounced racial bias.

Senior author Alice O’Toole, from the School of Behavioral and Brain Sciences, said that different measures of performance from the same algorithm can indicate bias or equity.
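
To make that point concrete, consider a minimal sketch in Python. Everything in it is hypothetical and invented purely for illustration rather than drawn from the UT Dallas study: the similarity scores, the 0.60 decision threshold and the two group labels are assumptions. It shows how, at a single threshold, one group can fare worse on false matches while the other fares worse on false non-matches, so whether the same algorithm looks biased depends on which measure is reported.

    # Hypothetical illustration only: the scores, threshold and group labels below
    # are invented, not taken from the UT Dallas study. It compares two standard
    # face-matching error rates, false match rate (FMR) and false non-match rate
    # (FNMR), for the same algorithm at the same decision threshold.

    THRESHOLD = 0.60  # similarity score at or above which a pair is called a "match"

    # (similarity score, pair shows the same person) for two demographic groups
    scores = {
        "group_a": [(0.72, True), (0.55, True), (0.58, True),
                    (0.65, False), (0.40, False), (0.35, False)],
        "group_b": [(0.80, True), (0.75, True), (0.62, True),
                    (0.50, False), (0.63, False), (0.66, False)],
    }

    for group, pairs in scores.items():
        genuine = [s for s, same in pairs if same]        # same-person pairs
        impostor = [s for s, same in pairs if not same]   # different-person pairs
        fmr = sum(s >= THRESHOLD for s in impostor) / len(impostor)  # impostors wrongly accepted
        fnmr = sum(s < THRESHOLD for s in genuine) / len(genuine)    # genuine pairs wrongly rejected
        print(f"{group}: FMR={fmr:.2f}  FNMR={fnmr:.2f}")

    # Output:
    # group_a: FMR=0.33  FNMR=0.67  <- looks worse off if false non-matches are reported
    # group_b: FMR=0.67  FNMR=0.00  <- looks worse off if false matches are reported

This is why researchers typically report both error rates across demographic groups: a single figure of merit can hide exactly this kind of trade-off.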

The UT work should be welcome data for students entering the University of Cambridge’s inaugural master’s degree in managing the risks of AI, the first such program in the UK. The school is calling it an AI ethics program.

Does this effort to dig deep into the bias issue have legs? One vendor’s chief executive seems to think so.

Robert Prigge, chief executive of Jumio, said bias has become a brand issue for companies investing in the technology.

Bias is a potential legal land mine, of course, he said. But it also shapes customer sentiment, something companies cannot escape with an undisclosed settlement and a non-disclosure agreement.

If nothing else, AI’s black boxes need to be opened to the light more often. What executives do not know about their AI could devastate their firms.
