Cutting out clustered data points to reduce facial recognition bias explained in AFRL talk

Howard pitches bias reduction method to the community
Despite the presence of many “heavy hitters” in the public discourse about bias in facial recognition, misunderstanding about the nature of the problem and where it comes from is widespread, according to a recent talk as part of the Applied Face Recognition Lab’s virtual talk series.

John Howard, principal data scientist of the Identity and Data Sciences Laboratory at the Maryland Test Facility, delved into the issue in a presentation titled ‘Understanding and Mitigating Bias in Human & Machine Face Recognition.’

While many observers, including many working in computer science and machine vision, emphasize the role of data as a cause of bias in biometric algorithm performance, Howard notes that there are many possible sources.

“I also think just blaming the data is, frankly, a way to dodge what are probably more challenging and more interesting issues,” Howard explains. This tendency is attractive because it leads to a resolution data scientists are used to and comfortable with: the ingestion of more data.

Loss function, evaluation bias, and the way that people relate to machines are important to a more complete understanding of the issue of bias in facial recognition, Howard argues. The latter issue includes projection bias, confirmation bias, and automation bias. In other words, people tend to expect machines to behave like them, confirm their beliefs, and produce results that do not need to be verified.

Face is a newer biometric modality than fingerprint and iris, Howard says, and lessons can possibly be taken from the two older modalities of the “big three.” Elements unique to the face modality may present “unique problems,” however.

The false matches produced by iris recognition algorithms, for instance, often cross between genders and ethnicities, while those in face do not. This makes it harder for people to spot errors in face matching, despite the same terminology (“false match error”) being used in each case.

Howard reviewed several research papers demonstrating how different biases manifest. Automation bias is modest, for instance, and in ideal circumstances shows up mostly when people are unsure. When circumstances are less ideal, such as when people are wearing masks, people are more likely to defer to a computer’s assessment.

He also reviewed the effect of “broad homogeneity” and findings of NIST’s FRVT Part 3, which assesses bias in algorithms on an individual basis.

Ultimately, while faces do contain similar or ‘clustered’ data based on demographics, Howard emphasizes that research indicates it is possible to select particular data points that do not exhibit clustering, reducing the false match errors that amount to bias in face biometrics, particularly when a human is in the loop. With those clustered points removed, the algorithms return candidate lists that look more like those in fingerprint and iris recognition, and the right candidate is, in many cases, obvious to the human eye.
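The talk does not specify how such non-clustered data points would be identified, but the general idea can be sketched with a simple, hypothetical approach: score each dimension of a face embedding by how strongly it separates demographic groups (a one-way ANOVA-style variance ratio), then keep only the dimensions with the weakest demographic clustering. All function names and the scoring choice here are illustrative assumptions, not Howard’s actual method.

```python
import numpy as np

def demographic_clustering_score(embeddings, groups):
    """Between-group vs. within-group variance ratio per embedding
    dimension (an ANOVA F-style statistic). Higher scores mean the
    dimension clusters more strongly by demographic group.
    NOTE: illustrative metric only, not the method from the talk."""
    embeddings = np.asarray(embeddings, dtype=float)
    groups = np.asarray(groups)
    overall_mean = embeddings.mean(axis=0)
    between = np.zeros(embeddings.shape[1])
    within = np.zeros(embeddings.shape[1])
    for g in np.unique(groups):
        members = embeddings[groups == g]
        group_mean = members.mean(axis=0)
        between += len(members) * (group_mean - overall_mean) ** 2
        within += ((members - group_mean) ** 2).sum(axis=0)
    return between / np.maximum(within, 1e-12)  # avoid division by zero

def select_unclustered_dims(embeddings, groups, keep_fraction=0.5):
    """Indices of the embedding dimensions least associated with
    demographic group, i.e. the 'non-clustered' data points."""
    scores = demographic_clustering_score(embeddings, groups)
    k = max(1, int(keep_fraction * len(scores)))
    return np.argsort(scores)[:k]
```

Matching on only the retained dimensions would, under this sketch, make false matches less likely to concentrate within a single demographic group.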
