Panasonic, academic researchers say data partitioning can reduce face biometrics bias
Two Panasonic divisions and a Singapore university have developed a way to train face biometrics algorithms that they say improves the performance of facial recognition for demographic groups represented by less training data, EE Times Asia reports.
The partner organizations are Panasonic Connect Co. Ltd, Panasonic R&D Center Singapore (Singapore Research Institute) and Singapore’s Nanyang Technological University (NTU).
A paper on the face biometrics training method, which they call “Invariant Feature Regularization for Fair Face Recognition,” has been accepted for publication by the International Conference on Computer Vision (ICCV) 2023.
The researchers say models tend to pick up spurious demographic-specific features. These can be removed through causal intervention, but making the required annotations is prohibitively expensive.
Their method involves generating “diverse data partitions iteratively in an unsupervised fashion,” according to the abstract. The data partitions act as a self-annotation feature to “deconfound” the model through “Invariant Feature Regularization (INV-REG).”
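The abstract describes the core loop only at a high level, so the following is a minimal, hypothetical sketch of the general idea, not the paper's actual INV-REG implementation: partition the training embeddings without labels (here a simple k-means stands in for the paper's unsupervised partitioning), treat the partition assignments as self-annotations, and penalize the model when its loss differs across partitions, pushing it toward features that are invariant to the discovered groupings. All function names and the variance-based penalty are illustrative assumptions.

```python
import numpy as np

def kmeans_partition(features, k=2, iters=10, seed=0):
    """Assign each embedding to one of k partitions (the 'self-annotation').
    A basic k-means stand-in for the paper's iterative unsupervised partitioning."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(features[:, None] - centers[None], axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = features[labels == j].mean(axis=0)
    return labels

def invariance_penalty(per_sample_loss, partition_labels):
    """Variance of the mean loss across partitions: zero when the model
    performs equally well on every partition, which is the invariance goal."""
    means = [per_sample_loss[partition_labels == j].mean()
             for j in np.unique(partition_labels)]
    return float(np.var(means))

# Toy usage: two clusters of embeddings whose samples incur different losses.
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(5, 1, (50, 8))])
loss = np.concatenate([np.full(50, 0.2), np.full(50, 0.8)])
labels = kmeans_partition(feats, k=2)
penalty = invariance_penalty(loss, labels)  # larger when partitions differ in difficulty
```

In a real training loop this penalty would be added to the recognition loss, and the partitioning would be refreshed periodically so the regularizer keeps targeting whatever spurious groupings the current features encode.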
The researchers tested the method against the Masked Face Recognition Challenge evaluation dataset and found reduced error rates across four racial groups and for images of women.
University of Wisconsin researchers looking into bias in technology and social media suggest that unconscious biases, left unchallenged due to huge disparities in tech workforces, have contributed to disparate results in a wide range of systems, including facial recognition.
Researchers with the U.S. Department of Homeland Security recently found that while race and gender demographic disparities among face biometrics algorithms have improved, they are present among the majority of models.