Biometrics bias less than reported, Blink CEO says, but university to investigate AI
Blink Identity CEO Mary Haskett says any biometrics or AI that could lead to mass surveillance should be restricted precisely because the algorithms are highly accurate, even for people of different skin colors, following her evaluation of NIST’s Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects report. Haskett says algorithms should be used to support human decision-making, not replace it.
NIST’s 2019 report tested more than 200 biometric algorithms and found wide variation in performance between the best and worst performers. While some algorithms NIST tested had extremely high error rates, the best performing ones showed almost undetectable demographic differences in false positives.
NIST’s work was motivated by studies of demographic effects in recent face recognition and gender estimation algorithms, and quantifies the accuracy of face recognition algorithms for demographic groups defined by sex, age, and race or country of birth.
Biometric matching calculates the probability that two faces belong to the same person, which means there can never be 100 percent certainty (though certainty can exceed 99.999 percent). The best performing algorithms in NIST’s test had non-significant differences in false negative rates: 0.49 percent for black females and 0.85 percent for white males. Some reporting on the study has nonetheless been misleading, according to Haskett, as the difference in false positive rates between black females and white males for these algorithms is only 0.002 percent.
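For readers unfamiliar with these metrics, the sketch below is a minimal illustration in Python of how false positive and false negative rates might be tallied per demographic group from a matcher’s similarity scores. It is not NIST’s evaluation code or any vendor’s implementation, and the scores, threshold and group labels are hypothetical.

```python
import numpy as np

def error_rates(scores, same_person, threshold=0.6):
    """Compute false positive and false negative rates for a set of face comparisons.

    scores      : similarity scores in [0, 1] produced by a hypothetical matcher
    same_person : True where the pair is a genuine (mated) pair, False for impostors
    threshold   : score at or above which the matcher declares a match
    """
    scores = np.asarray(scores, dtype=float)
    same_person = np.asarray(same_person, dtype=bool)
    predicted_match = scores >= threshold

    genuine = same_person      # pairs that really are the same person
    impostor = ~same_person    # pairs that are different people

    # False positives: impostor pairs wrongly accepted as matches
    fpr = float(np.mean(predicted_match[impostor])) if impostor.any() else float("nan")
    # False negatives: genuine pairs wrongly rejected
    fnr = float(np.mean(~predicted_match[genuine])) if genuine.any() else float("nan")
    return fpr, fnr

# Hypothetical comparison results, grouped by a demographic label
groups = {
    "group_a": ([0.92, 0.55, 0.71, 0.30], [True, True, True, False]),
    "group_b": ([0.88, 0.64, 0.20, 0.65], [True, True, False, False]),
}

for name, (scores, labels) in groups.items():
    fpr, fnr = error_rates(scores, labels)
    print(f"{name}: false positive rate={fpr:.3f}, false negative rate={fnr:.3f}")
```

Comparing the per-group rates produced this way is, in rough outline, how demographic differentials like the figures above are expressed; NIST’s actual methodology is far more extensive.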
Highly accurate algorithms tend to have smaller demographic differentials, though contemporary face recognition algorithms still exhibit differentials of various magnitudes. In February, allegations of poor face biometrics performance for women and people of color resulted in legal action by the Lawyers’ Committee for Civil Rights Under Law.
Blink Identity’s algorithms fall into the “best performing” category on the NIST tests, according to the company, with no indication of practically significant demographic bias. NIST compared algorithms in a controlled environment using sets of matched face pairs as the data set, whereas in practice, environmental factors such as pose angle, lighting and changing expression also have a significant impact on accuracy, Blink Identity comments.
Texas A&M initiative aims to address demographic bias in AI, raise awareness
Code^Shift is a new data science lab at Texas A&M University, developed to help bridge the gap between systems and people and to eliminate bias from technology in the process. The lab was initially created to raise awareness through research and education as more of the world becomes automated and machines and systems are increasingly used to make decisions.
“Code^Shift tries to shift our thinking about the world of code or coding in terms of how we can be thinking of data more broadly in terms of equity, social healing, inclusive futures and transformation. A lot of trauma and a lot of violence has been caused, including by media and technologies, and first we need to acknowledge that, and then work toward reparations and a space of healing individually and collectively,” says Lab Director Srividya Ramasubramanian.
Ramasubramanian became aware of biases in automated systems through experiences with applications such as automated faucets that would not sense her hands. While working remotely during COVID-19, she encountered speech recognition software that failed to understand her accent and Zoom virtual backgrounds that failed to work with her hair and skin color.
In March, findings presented at the Mozilla Festival suggested that bug bounty programs could be deployed to detect algorithmic bias in biometrics.
Code^Shift will attempt to confront this issue using a collaborative research model that includes Texas A&M experts in social science, data science, engineering and several other disciplines. Experts will work together to broaden public understanding of biases in machine learning and artificial intelligence.
Ramasubramanian recommends that to build an inclusive system, engineers include representative data from all populations and social groups, which could help facial recognition algorithms learn to recognize people from more ethnic backgrounds. “To me, we should also be leaders in thinking about the ethical, social, health and other impacts of data.”
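As a rough illustration of that recommendation, the sketch below shows one simple way a team might audit how demographic groups are represented in a training set and oversample under-represented groups toward parity. It is not Code^Shift’s methodology or any vendor’s pipeline, and the records and group labels are hypothetical.

```python
from collections import Counter
import random

def rebalance(records, group_key="group", seed=0):
    """Oversample under-represented groups until each group appears equally often."""
    rng = random.Random(seed)
    counts = Counter(r[group_key] for r in records)
    target = max(counts.values())

    balanced = list(records)
    for group, count in counts.items():
        members = [r for r in records if r[group_key] == group]
        # Duplicate randomly chosen members of the group to reach the target count
        balanced.extend(rng.choices(members, k=target - count))
    return balanced

# Hypothetical image metadata records
data = [
    {"image": "img_001.jpg", "group": "A"},
    {"image": "img_002.jpg", "group": "A"},
    {"image": "img_003.jpg", "group": "A"},
    {"image": "img_004.jpg", "group": "B"},
]

print(Counter(r["group"] for r in data))             # before: Counter({'A': 3, 'B': 1})
print(Counter(r["group"] for r in rebalance(data)))  # after:  Counter({'A': 3, 'B': 3})
```

In practice, duplicating existing samples is a weaker remedy than collecting genuinely diverse data, but the same counting step is useful for auditing representation either way.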
Engineers do not necessarily collaborate with social scientists already, which is where Code^Shift comes in, raising questions around accountability in how systems are created, who is able to access the data, and how it is being shared.