Biometric technology buyers are facing an assessment conundrum: Idemia
The market for biometric identification technology has grown rapidly over the past several years, increasingly forcing buyers to choose among a plethora of options.
To pick the right vendor, biometric technology buyers now have to track not only the latest regulatory moves but also hot-button issues such as bias, especially in facial recognition, which can misidentify members of minority groups. The risks are present whether the buyer is a bank verifying customers, a private company using biometric tech to give employees office access, or a law enforcement agency looking to speed up suspect identification.
According to consumer opinion surveys conducted by the FIDO Alliance, more than half of respondents say that a system showing noticeable performance differences across demographic groups can hurt an institution or brand, says Teresa Wu, vice president of smart credentials at Idemia North America, speaking during a webinar organized by Idemia Public Security.
“They will definitely trust the brand less or have a lesser opinion of the institution,” says Wu. “Having an algorithm or having a biometric system that’s not performing equitably will impact not just your use cases and your end users, but there’s a clear business impact on the organization.”
To understand computational bias, buyers can look to standards bodies such as NIST, which evaluates algorithms. But computational bias is only the tip of the iceberg, sitting on top of human and systemic biases that also need to be managed, she adds.
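For a sense of what a computational bias check involves, the minimal Python sketch below computes a false non-match rate per demographic group, the kind of demographic differential that algorithm evaluations report. This is an illustration only, not Idemia's or NIST's methodology; the groups, scores and threshold are invented.

```python
from collections import defaultdict

# Hypothetical evaluation records: (demographic_group, is_genuine_pair, matcher_score).
# In a real evaluation these would come from a large labeled test set.
RESULTS = [
    ("group_a", True, 0.91), ("group_a", True, 0.62), ("group_a", False, 0.20),
    ("group_b", True, 0.88), ("group_b", True, 0.41), ("group_b", False, 0.35),
]
THRESHOLD = 0.5  # Illustrative decision threshold; real systems tune this per use case.

def false_non_match_rates(results, threshold):
    """Per-group FNMR: the fraction of genuine pairs the matcher wrongly rejects."""
    genuine = defaultdict(int)
    rejected = defaultdict(int)
    for group, is_genuine, score in results:
        if is_genuine:
            genuine[group] += 1
            if score < threshold:
                rejected[group] += 1
    return {g: rejected[g] / genuine[g] for g in genuine}

print(false_non_match_rates(RESULTS, THRESHOLD))
# {'group_a': 0.0, 'group_b': 0.5} -> a gap this large would warrant investigation
```

A system whose error rates diverge sharply between groups is exactly what Wu describes as performing inequitably, even if its aggregate accuracy looks strong.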
Companies also need to keep up with frameworks and regulations such as the White House executive order on AI issued last year and the White House Office of Management and Budget (OMB) memorandum issued in March.
The memo applies only to organizations that do work for the federal government, but many of its points can be replicated in other sectors, Wu points out. In the future, government guidelines could become best practice for any agency or organization planning to procure AI technology for biometric identification.
“Oftentimes the federal government kind of takes the lead and then state policy follows it or at least mimics it,” she says.
The government memo emphasizes responsible procurement of AI for biometric identification, including addressing the risk that the data used to train and operate the AI may not have been lawfully collected, or may not be sufficiently accurate to support reliable biometric identification. This includes the risk that the biometric data was collected without proper consent or that it introduces algorithmic bias.
Idemia says it has developed a framework that helps biometric technology buyers assess which solution fits their needs and mitigate the associated risks.
Until three years ago, buyers would assess products by speed, accuracy, computational power and cost, with cost often dictating the selection of a biometric system.
The company proposes a different framework built on six procurement criteria. The first is transparency: whether the solution has been independently validated or tested, and whether that information is published. Another is experience, or understanding the deployment environment, followed by security, which includes protecting data and enhancing privacy. The remaining criteria are accuracy, ethics and robustness, meaning that the solution needs to work anywhere, for everyone, without bias.
In practice, this means that during procurement the vendor will have to go through a questionnaire or checklist, Wu adds. Organizations will increasingly need to prove they procured biometric technology responsibly.
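To make the idea concrete, here is a minimal sketch of how such a procurement checklist might be encoded. The six criteria come from the framework described above, but the individual questions and the review logic are invented for illustration and are not Idemia's actual questionnaire.

```python
# Hypothetical procurement checklist organized around the six criteria.
CHECKLIST = {
    "transparency": "Has the solution been independently validated or tested, with results published?",
    "experience": "Does the vendor understand the deployment environment?",
    "security": "Does the solution protect data and enhance privacy?",
    "accuracy": "Does the solution meet the accuracy requirements of the use case?",
    "ethics": "Was the underlying data lawfully collected, with proper consent?",
    "robustness": "Does the solution work anywhere, for everyone, without bias?",
}

def open_items(answers):
    """Return the criteria that still need follow-up before procurement."""
    return [criterion for criterion, satisfied in answers.items() if not satisfied]

# Example: a vendor that clears everything except independent validation.
answers = {criterion: True for criterion in CHECKLIST}
answers["transparency"] = False
for criterion in open_items(answers):
    print(f"Follow up on {criterion}: {CHECKLIST[criterion]}")
```

Encoding the questionnaire this way also creates a record, which matters if organizations must later prove they procured the technology responsibly.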
“You need to dare to ask the nontechnical questions beyond just the algorithm,” says Wu.