Looking for biometric bias in the guts of AI

A group of biometrics researchers in Spain says it has come up with a new, more efficient way to detect bias in the gender-classification models used by facial recognition algorithms.

They studied the way unintended bias affects how algorithms learn, which ultimately controls performance, rather than examining biometric algorithm performance itself. In doing so, the researchers also illustrated the lack of transparency in artificial intelligence.

The team, working at the Universidad Autonoma de Madrid’s engineering school, created a method of detection that they call InsideBias. This software judges unintended (and, in practice, unfair) bias based on how a model represents information. Specifically, InsideBias analyzes filter activation in deep networks.

By looking at this internal process, rather than taking the typical approach of measuring an algorithm's final performance, the researchers claim to detect biased models using only 15 images out of a database of 72,000 face images spanning three crudely defined ethnic groups (white, black and Asian).

In a deep neural network, artificial neurons activate when they receive an input. Viewed layer by layer, the deeper the layer of activated neurons, the finer the features of the input it is comparing.
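As a rough illustration of what "activation" means here, the toy PyTorch sketch below (not the authors' code; the small network, image size and layer count are stand-ins) registers forward hooks on a few convolutional layers and records how strongly each layer fires for a single input image.

```python
# Toy illustration (not the InsideBias implementation): record how strongly
# each convolutional layer "fires" for one input image.
import torch
import torch.nn as nn

# A small stand-in network; the study's model had 22 layers.
net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
)

layer_outputs = []

def record(module, inputs, output):
    # Mean absolute activation over all filters and pixels in this layer
    layer_outputs.append(output.abs().mean().item())

for module in net:
    if isinstance(module, nn.Conv2d):
        module.register_forward_hook(record)

image = torch.randn(1, 3, 64, 64)  # stand-in for a preprocessed face crop
with torch.no_grad():
    net(image)

for i, act in enumerate(layer_outputs, start=1):
    print(f"conv layer {i}: mean activation {act:.3f}")
```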

The network in the research team's gender-detection experiments had 22 layers, and the experiments involved images from two groups: one made up of whites, the other combining pictures of people from the other two broad demographic groups.

The model’s first layer compared texture and colors, and activation was about the same (high) for both groups, although images of whites saw a slightly better response. This trend continued through layer 13, when an accelerating divergence set in.

Activations for whites started high and ended higher. Activations for the images in the other group slumped before plummeting at layer 22. This means the model became less and less able to sort genders for that group as the discriminating features grew finer.
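A minimal sketch of that kind of layer-wise comparison, in the spirit of InsideBias but not the authors' actual code, is shown below. The model, the batches of face images and the 15-image batch size are illustrative assumptions; the idea is that activation ratios drifting well below 1.0 in the deepest layers would flag a network that represents one group much more weakly than the other.

```python
# Sketch of a layer-wise activation comparison between two demographic groups.
# The model and data below are placeholders, not the authors' network or dataset.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)  # stand-in for a trained gender classifier
model.eval()

activations = {}

def hook(name):
    def _hook(module, inputs, output):
        # Mean absolute activation over the whole batch for this layer
        activations[name] = output.abs().mean().item()
    return _hook

for name, module in model.named_modules():
    if isinstance(module, torch.nn.Conv2d):
        module.register_forward_hook(hook(name))

def mean_layer_activations(images):
    """Return {layer_name: mean |activation|} for a batch of face crops."""
    activations.clear()
    with torch.no_grad():
        model(images)
    return dict(activations)

# Hypothetical preprocessed batches, 15 images per group as in the study
group_white = torch.randn(15, 3, 224, 224)
group_other = torch.randn(15, 3, 224, 224)

acts_white = mean_layer_activations(group_white)
acts_other = mean_layer_activations(group_other)

# Ratios well below 1.0 in deep layers suggest the network represents the
# second group much more weakly, a possible marker of a biased model.
for layer in acts_white:
    ratio = acts_other[layer] / (acts_white[layer] + 1e-8)
    print(f"{layer}: activation ratio {ratio:.2f}")
```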

A lack of transparency in model building and operation is not serving the long-term interests of the artificial intelligence field, according to a Harvard University report on how the technologies are being developed.

The report focuses on privacy, but the implications are similar.

The author contends that privacy concerns are debated based on the performance of AI algorithms — in the case of his report, face recognition systems. But privacy problems are built into the process of creating algorithms, and they are being ignored, according to the report.

Transparency is necessary to create the highest-quality products and to build trust in a global society largely still unsure how it feels about AI.
