UK knowingly deploys biased facial recognition passport checking system

The UK government has admitted that it launched a biometric facial recognition passport photo checker in 2016 despite knowing the system would struggle with applicants who have very light or very dark skin tones, New Scientist reports.
“User research was carried out with a wide range of ethnic groups and did identify that people with very light or very dark skin found it difficult to provide an acceptable passport photograph,” the UK Home Office wrote in a document released following a freedom of information (FOI) request. “However, the overall performance was judged sufficient to deploy.”
According to New Scientist, two black users have publicly shared the errors they encountered with the system.
“A person’s race should not be a barrier to using technology for essential public services,” says a spokesperson for the UK’s Equality and Human Rights Commission. “We are disappointed that the government is proceeding with the implementation of this technology despite evidence that it is more difficult for some people to use it based on the color of their skin.”
Algorithmic bias in facial recognition technology has long been discussed. In July, the UK Information Commissioner’s Office (ICO) selected 10 out of 64 candidate projects, including one focused on identifying and mitigating algorithmic bias in machine learning models used for remote biometric identity verification. Also in July, an artist working with the European Commission’s SHERPA project on ethics in machine learning built an intelligent water gun that uses facial recognition to identify targets, demonstrating how algorithmic bias can lead to discrimination and unfair treatment.
Although some facial recognition systems have been shown to fail more often for people with darker skin tones, failure rates vary widely from system to system.
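To make the idea of group-dependent failure rates concrete, the following minimal sketch shows how an audit might measure them: given verification attempts for photos that are assumed to be compliant, it computes a false rejection rate separately for each skin-tone group. All group labels, data, and numbers here are illustrative assumptions, not figures from the Home Office system.

```python
from collections import defaultdict

def per_group_false_rejection_rates(attempts):
    """Compute the false rejection rate (FRR) per demographic group.

    `attempts` is a list of (group, accepted) pairs. Every photo is
    assumed to be genuinely compliant, so any rejection counts as a
    false rejection. Group labels and data are purely illustrative.
    """
    totals = defaultdict(int)
    rejections = defaultdict(int)
    for group, accepted in attempts:
        totals[group] += 1
        if not accepted:
            rejections[group] += 1
    return {g: rejections[g] / totals[g] for g in totals}

# Hypothetical audit data: (skin-tone group, photo accepted by the checker)
attempts = [
    ("very_light", True), ("very_light", False), ("very_light", True),
    ("medium", True), ("medium", True), ("medium", True),
    ("very_dark", False), ("very_dark", False), ("very_dark", True),
]

for group, frr in per_group_false_rejection_rates(attempts).items():
    print(f"{group}: {frr:.0%} of compliant photos falsely rejected")
```

A large gap between the per-group rates, as in the toy data above, is exactly the kind of disparity the FOI disclosure describes.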
The Home Office claims applicants can simply proceed with their passport application and override the automated check, but it is not that simple: the system warns users that problems could arise with the application if the photo does not meet the requirements.
The Home Office said it would “continue to conduct user research and usability testing with appropriate participants to ensure that users from different ethnicities can follow the photo guidance and provide a photo that passes the photo checks.”
University of Sheffield computer scientist Noel Sharkey told New Scientist that the affair demonstrates the need for new regulation of facial recognition.
Earlier this year, a U.S. House subcommittee discussed the use of facial recognition technology, how biased algorithms can lead to discrimination, and the lack of diversity at tech companies. There was also a focus on how facial recognition success rates differ between white men and women of color. IBM has recently launched software to analyze how and why algorithms make decisions, as well as detect bias and recommend changes. For the past year, MIT researchers have also been working on an automated tool to ‘de-bias’ AI training data.
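Re-balancing training data, the core idea behind such ‘de-biasing’ efforts, can be illustrated with a short sketch. The code below is a naive oversampling example under assumed group labels and counts; it is not IBM’s software or the MIT researchers’ actual method.

```python
import random

def rebalance_by_group(samples, rng=random.Random(0)):
    """Oversample so every group appears as often as the largest group.

    `samples` is a list of (group, example) pairs. This is a naive
    illustration of dataset re-balancing, not any specific product's
    or paper's algorithm.
    """
    by_group = {}
    for group, example in samples:
        by_group.setdefault(group, []).append(example)
    target = max(len(v) for v in by_group.values())
    balanced = []
    for group, examples in by_group.items():
        # Keep all originals, then draw random duplicates up to the target.
        extra = [rng.choice(examples) for _ in range(target - len(examples))]
        balanced.extend((group, e) for e in examples + extra)
    return balanced

# Hypothetical face-image training set: 90% group A, 10% group B.
data = [("A", f"img_a_{i}") for i in range(90)] + \
       [("B", f"img_b_{i}") for i in range(10)]
balanced = rebalance_by_group(data)
print({g: sum(1 for grp, _ in balanced if grp == g) for g in ("A", "B")})
# -> {'A': 90, 'B': 90}
```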