Researchers urge facial recognition restrictions, broad consideration by Canadian lawmakers
Several researchers told Canadian lawmakers that the risks facial recognition poses to individuals and society are significant, and should be considered in the broadest possible terms, in a hearing highlighted by Freedom to Tinker. Witnesses repeatedly argued that not enough is known about how facial recognition works, how it fails, and what harms it causes.
The hearing was part of an examination of the use and impact of facial recognition technology by Canada's Parliamentary Standing Committee on Access to Information, Privacy and Ethics.
Committee Chair MP Pat Kelly called on academics and technical experts from Canada and elsewhere in the early-April session, including futurist Sanjay Khanna.
From the perspective of cognitive science, the committee heard, recognizing familiar faces is relatively easy for people; matching unfamiliar faces is much harder.
“Humans are surprisingly bad at identifying unfamiliar faces,” says Dr. Robert Jenkins of the University of York in the UK, even under excellent conditions and among trained professionals. The technology must be examined in the context of unfamiliar faces, he argues. With people in the loop making final decisions about identification, human error is “baked into the system,” in addition to the imperfection of algorithms.
Within a legal context, transparency about limitations is therefore very important.
Presenters also highlighted the capacity of individuals to understand and control the use of facial recognition on them, the ability of legislators to anticipate future policy implications, the origin and quality of biometric training datasets, the 2018 Gender Shades study of demographic bias in AI facial analysis systems, and the way the technology is integrated into socio-technical contexts. Despite extensive discussion, more recent studies on the topic were not mentioned.
Sentiment analysis was raised within the first ten minutes, followed by other technologies that could cause harm if used in conjunction with facial recognition.
Princeton PhD researcher Angelina Wang says “science does not yet know how to address” facial recognition’s problems with “brittleness and interpretability.” Brittleness refers to “known ways that these facial recognition models can break down, and allow bad actors to circumvent and trick the model,” such as adversarial attacks, and is related to interpretability.
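The adversarial attacks Wang alludes to can be sketched in miniature. The example below is mine, not from the hearing: it treats a face matcher as a simple linear scorer over a (made-up) 128-dimensional embedding and shows that a tiny, targeted fast-gradient-sign-style perturbation of the input can collapse the match score.

```python
import numpy as np

# Toy illustration of "brittleness": a linear match scorer can be flipped
# by a small perturbation aimed against its gradient. All numbers here are
# synthetic; real systems use deep networks, but the attack idea is the same.
rng = np.random.default_rng(0)
w = rng.normal(size=128)        # hypothetical face-embedding weights
x = rng.normal(size=128)        # hypothetical probe embedding

score = w @ x                   # raw match score for the clean input

# For a linear scorer the gradient of the score w.r.t. x is just w, so
# stepping each coordinate slightly against sign(w) minimizes the score
# for a fixed per-coordinate budget eps (the FGSM idea).
eps = 0.2
x_adv = x - eps * np.sign(w)

adv_score = w @ x_adv
print(score, adv_score)         # the perturbed score drops sharply
```

The perturbation is bounded by `eps` in every coordinate, i.e. visually small, yet the score change scales with the sum of the weight magnitudes, which is why such attacks are effective.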
Part of the problem with a lack of interpretability, according to Wang, is the use of “spurious correlations” to perform matches.
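A spurious correlation can be demonstrated with a deliberately tiny example (my sketch, not Wang's): if an incidental cue such as lighting happens to track identity in the training data, a simple matcher learns the shortcut and fails the moment that correlation breaks.

```python
import numpy as np

# Hypothetical data: column 0 is the genuine facial feature, column 1 is a
# spurious cue (say, lighting) that accidentally correlates with identity.
X_train = np.array([[ 1.0,  5.0],   # person A, always seen in bright light
                    [-1.0, -5.0]])  # person B, always seen in dim light
centroids = X_train                  # one training example per identity

def predict(x):
    # Nearest-centroid match; the large spurious column dominates distance.
    return int(np.argmin(((centroids - x) ** 2).sum(axis=1)))

# Test time: person A again, but now in dim light (correlation broken).
x_test = np.array([1.0, -5.0])
print(predict(x_test))               # the lighting shortcut wins: class 1
```

Because the model never had to separate the genuine signal from the incidental one, its behavior is also hard to interpret, which is the connection Wang draws between the two problems.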
Princeton CITP postdoctoral researcher Elizabeth Anne Watkins spoke specifically about facial verification, which is far more common in Canada. Facial verification is increasingly deployed in workplaces, causing harms, according to Watkins.
Watkins published a paper in 2021 on ‘The tension between information justice and security: Perceptions of facial recognition targeting.’
Those harms include workers’ worries about the storage of their personal data, the inconvenience of repeated verifications and false non-matches, and the difficulty of finding adequate conditions to perform a biometric match.
A moratorium is necessary, and in the meantime regulatory requirements for algorithmic impact assessments and proof of fraud reduction should be put in place, according to Watkins.
Legislators inquired about the relative error rates of human operators and biometric algorithms, for faces or fingerprints, the types of errors that people and algorithms make, the limitations of camera technology, and the frameworks governments can draw on to form policy.
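The two error types legislators asked about have standard names in biometrics: a false match (an impostor accepted) and a false non-match (a genuine user rejected). A minimal sketch, using made-up similarity scores, shows how a single verification threshold trades one rate against the other.

```python
import numpy as np

# Synthetic similarity scores for illustration only; real systems report
# these rates over large evaluation sets.
genuine  = np.array([0.91, 0.84, 0.62, 0.88, 0.79])  # same-person comparisons
impostor = np.array([0.12, 0.33, 0.71, 0.25, 0.40])  # different-person comparisons

threshold = 0.7
fnmr = np.mean(genuine  <  threshold)   # false non-match rate (genuine rejected)
fmr  = np.mean(impostor >= threshold)   # false match rate (impostor accepted)
print(fnmr, fmr)                         # 0.2 0.2
```

Raising the threshold lowers the false match rate but raises the false non-match rate, which is why policy questions about "the error rate" need to specify which error, and at what operating point.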
Several moments during testimony seemed to indicate uncertainty about how facial recognition is used in Canadian law enforcement, and witnesses largely admitted they are unaware of actions being taken in other jurisdictions to establish guardrails on the use of facial recognition.
Further recommendations from witnesses include establishing an expert workforce on facial recognition, and following the model of Illinois’ Biometric Information Privacy Act by putting the onus on businesses to collect consent, rather than on individuals to opt out of the use of their biometrics.
The discussion digressed into predictive algorithms several times, but presenters repeatedly emphasized the need to take a holistic view of the issues related to facial recognition, and the context around its use.
In a related plea, published in Magazine OC, Privacy Commissioner of Canada Daniel Therrien urges lawmakers to pass new legislation and establish a fundamental legal right to privacy.