Careful procurement, data collection help avoid common biometrics challenges
The Department of Homeland Security (DHS) and Idemia have addressed critical issues surrounding the security and transparency of artificial intelligence (AI) in biometric algorithms. This is in response to the White House’s recent executive order establishing new standards for AI safety and security, as well as the DHS’s IT strategic plan, which prioritizes the development of AI systems that are safe, secure, trustworthy, and free from algorithmic discrimination.
In a webinar this week, professionals from the Transportation Security Administration (TSA) and Idemia highlight the critical need to address algorithmic bias in AI-powered biometric systems, particularly in facial recognition software. The discussion emphasizes the importance of understanding how race, gender, and other demographic factors contribute to bias, and the necessity of establishing industry-wide standards and best practices to mitigate these issues.
The conversation also centers on evaluating and improving AI-powered biometric systems to ensure fairness, robustness, and accuracy, and on balancing innovation with stringent security requirements. Additionally, the speakers highlight the need for transparency in the procurement process and the importance of maintaining human interaction and consideration in the application of biometrics.
Evan Bays, vice president of engineering and DOJ operations at Idemia NSS (National Security Solutions), and Matt Gilkeson, chief technology officer and chief data officer of the TSA, delve into the current landscape of biometric algorithms.
Bays points out the significant progress made by the industry in recognizing and addressing algorithmic bias, but also underscores the need for continued vigilance. He explains that bias in biometric systems can lead to severe consequences, such as wrongful arrests or denial of services, which highlights the importance of developing and deploying AI with transparency and ethical considerations.
Gilkeson provides insights into the TSA’s approach to data governance and the measures taken to ensure AI systems are both secure and reliable. He emphasizes the role of responsible data collection and the rigorous testing of AI systems to ensure they are free from bias. This includes using diverse datasets for training algorithms and constantly reviewing the systems to avoid any inadvertent discrimination.
“We could talk about some of these other processes that you know have enabled some denial of the services that they’re rightfully entitled to. So these biases have an impact on a person’s life. It’s just not a computer. It’s just not statistical representation of data, what’s going on there, but how it actually translates into our society,” he continues.
The speakers also discuss the importance of transparency in the procurement process for AI applications. They stress the need for government agencies and private sector partners to work together to ensure that the algorithms they deploy are subject to stringent scrutiny and meet the highest standards of fairness and accuracy.
“One of the things here as of late is when the government is procuring AI for biometrics identification, generative AI is the one that catches a lot of the headlines,” says Bays.
“But when we’re talking about having an understanding of the accuracy, the reliability and the validity of AI to match identities in these large systems supporting either a one-to-one verification use case or a one-to-many identification use case, what risks are present as we go through this procurement process?”
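The two use cases Bays names differ in how risk scales. A rough sketch of the distinction, using cosine similarity over feature vectors (the function names and threshold are illustrative assumptions, not any vendor's actual pipeline): one-to-one verification compares a probe against a single claimed identity, while one-to-many identification searches an entire gallery, where the chance of a false match grows with gallery size.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two face-template feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe, enrolled, threshold=0.6):
    """One-to-one verification: does the probe match one claimed identity?"""
    return cosine(probe, enrolled) >= threshold

def identify(probe, gallery, threshold=0.6):
    """One-to-many identification: search the whole gallery for the best
    candidate. Because every gallery entry is a chance for a false match,
    large systems typically need stricter thresholds than verification."""
    scores = {name: cosine(probe, template) for name, template in gallery.items()}
    best = max(scores, key=scores.get)
    if scores[best] >= threshold:
        return best, scores[best]
    return None, scores[best]
```

This is why the procurement questions differ: a threshold tuned for a one-to-one check may produce unacceptable false match rates when the same algorithm backs a one-to-many search over millions of records.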
Bays adds that the capabilities of generative AI to produce synthesized images are still growing.
“Can somebody take a synthesized face and put it into a biometric system? What is the impact of that? AI has enabled not only the accuracy and efficiency of algorithms to improve, but also presented the opportunity for attacks,” he continues.
“Transparency of what is available, how the algorithm, as presented by a vendor, is measured against these various scenarios and data sets. One of the things you want to see, going back to the math aspect of it, is how close or how separate are the results across these various demographic groups, and this provides a nice chart that shows that where you can go through and see the performance for the scenario, for the populations, and understand where is that demographic effect and bias, and how can it be measured.”
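The per-population measurement Bays describes can be sketched in a few lines. The following is a simplified illustration, not Idemia's or NIST's actual methodology: it groups genuine-comparison scores by demographic label, computes a false non-match rate (FNMR) per group at a fixed threshold, and summarizes the spread as a worst-to-best ratio, the kind of "how close or how separate" figure a procurement reviewer could read off a vendor's chart.

```python
from collections import defaultdict

def per_group_fnmr(trials, threshold=0.6):
    """Compute a false non-match rate for each demographic group.
    `trials` is a list of (group_label, genuine_comparison_score) pairs --
    an illustrative stand-in for a vendor's per-population test results."""
    scores = defaultdict(list)
    for group, score in trials:
        scores[group].append(score)
    # A genuine pair scoring below the threshold is a false non-match.
    return {g: sum(s < threshold for s in ss) / len(ss) for g, ss in scores.items()}

def demographic_differential(fnmr_by_group):
    """Ratio of worst-group to best-group FNMR; values near 1.0 mean the
    algorithm performs similarly across populations."""
    worst = max(fnmr_by_group.values())
    best = min(fnmr_by_group.values())
    return worst / best if best > 0 else float("inf")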
Furthermore, Gilkeson and Bays highlight the necessity of maintaining human oversight in the application of biometric systems. While AI can enhance the efficiency and accuracy of identity verification processes, the speakers agree that human judgment remains a crucial component in scenarios where the technology could have life-altering implications.
Article Topics
accuracy | algorithms | biometric matching | biometric-bias | biometrics | facial recognition | IDEMIA | Idemia NSS | responsible AI | TSA
Comments