Cutting out clustered data points to reduce facial recognition bias explained in AFRL talk

Howard pitches bias reduction method to the community

Despite the presence of many “heavy hitters” in the public discourse about bias in facial recognition, misunderstanding about the nature of the problem and where it comes from is widespread, according to a recent presentation given as part of the Applied Face Recognition Lab’s virtual talk series.

John Howard, principal data scientist of the Identity and Data Sciences Laboratory at the Maryland Test Facility, delved into the issue in a presentation titled ‘Understanding and Mitigating Bias in Human & Machine Face Recognition.’

While many observers, including many working in computer science and machine vision, emphasize the role of data as a cause of bias in biometric algorithm performance, Howard notes that there are many possible sources.

“I also think just blaming the data is, frankly, a way to dodge what are probably more challenging and more interesting issues,” Howard explains. The tendency is attractive because it points to a resolution that data scientists are used to and comfortable with: the ingestion of more data.

Loss function, evaluation bias, and the way that people relate to machines are important to a more complete understanding of the issue of bias in facial recognition, Howard argues. The latter issue includes projection bias, confirmation bias, and automation bias. In other words, people tend to expect machines to behave like them, confirm their beliefs, and produce results that do not need to be verified.

Face is a newer biometric modality than fingerprint and iris, Howard says, and lessons can possibly be taken from the two older modalities of the “big three.” However, elements particular to the face modality may present “unique problems.”

The false matches produced by iris recognition algorithms, for instance, often cross genders and ethnicities, while those in face recognition do not. This makes errors in face matching harder for people to spot, even though the same terminology (“false match error”) is used in each case.

Howard reviewed several research papers demonstrating how different biases manifest. Automation bias, for instance, is modest and in ideal circumstances shows up mostly when people are unsure. When circumstances are less ideal, such as when people are wearing masks, people are more likely to defer to a computer’s assessment.

He also reviewed the effect of “broad homogeneity” and the findings of NIST’s FRVT Part 3, which assesses demographic differentials in individual algorithms.

Ultimately, while faces do contain similar or ‘clustered’ data based on demographics, Howard emphasizes that research indicates it is possible to select particular data points that do not exhibit clustering, reducing the false match errors that amount to bias in face biometrics, particularly when a human is in the loop. With clustered data removed, the algorithms return candidate lists that look more like those in fingerprint and iris recognition, and the right candidate is, in many cases, obvious to the human eye.
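Howard’s actual method is not spelled out in the talk summary, but the general idea of screening out demographically clustered features can be sketched in a few lines. The snippet below is a hypothetical illustration only: it assumes face templates are available as numeric embedding vectors with demographic labels for a development set, scores each dimension with a one-way ANOVA F-statistic (an assumed stand-in for whatever clustering measure the underlying research uses), and drops the most clustered dimensions before matching. The function names and the keep_fraction parameter are invented for this example.

```python
import numpy as np
from scipy.stats import f_oneway

def demographic_cluster_scores(embeddings, groups):
    """Score each embedding dimension by how strongly it separates
    demographic groups (one-way ANOVA F-statistic per dimension).
    Higher score = more demographic clustering in that dimension."""
    labels = np.unique(groups)
    per_group = [embeddings[groups == g] for g in labels]
    return np.array([
        f_oneway(*[g[:, d] for g in per_group]).statistic
        for d in range(embeddings.shape[1])
    ])

def drop_clustered_dimensions(embeddings, scores, keep_fraction=0.8):
    """Keep only the dimensions whose clustering score falls below the
    chosen cutoff, then re-normalise the vectors for cosine matching."""
    cutoff = np.quantile(scores, keep_fraction)
    keep = scores <= cutoff
    reduced = embeddings[:, keep]
    return reduced / np.linalg.norm(reduced, axis=1, keepdims=True)

# Hypothetical usage: 'embeddings' is an (N, 512) array of face templates
# and 'groups' is an (N,) array of demographic labels from a labelled dev set.
# scores = demographic_cluster_scores(embeddings, groups)
# gallery = drop_clustered_dimensions(embeddings, scores)
```

In practice the scoring statistic, the cutoff, and the re-normalisation step would all depend on the matcher and the data in question; this sketch only illustrates the shape of the approach described in the talk.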
