Facial recognition policy for New Zealand police outlines acceptable use
New Zealand police have published a policy on facial recognition technology (FRT), which lays out how and when authorities can use the biometric tool in investigations.
“This policy ensures that appropriate safeguards are in place for Police’s use of FRT and the storage of personal information, and that use of FRT is lawful, proportionate and appropriate in a New Zealand policing context,” the document says. It follows with a list of what can be considered “lawful policing functions” that warrant the use of facial recognition, and the conditions for said use.
FRT can only be used with lawfully obtained source and reference images and data. Use must have “regard to human rights and privacy interests.” It must be “approved, controlled, monitored, and governed,” operated according to standard procedure by officers who have received training on the use of facial recognition and biometrics. It must demonstrate “sufficiently high accuracy” and cannot show “an unacceptable level of bias or discrimination.”
In effect, the policy bans the use of real time facial recognition on live video feeds. While the policy notes that live FRT can be useful in certain policing scenarios, it concludes that “in the New Zealand context it is considered that the overall risks of live FRT outweigh the potential benefits. It follows that Police will not make decisions regarding the implementation of live FRT until the impacts from security, privacy, legal and ethical perspectives are fully understood, and it has engaged with communities and understood their views.”
For now, then, facial recognition will only be used on “historical data,” after an appropriate time delay. Police are not to use the facial recognition capabilities of third-party systems, but they can, in select instances, share data with government agencies.
Self-governed audit log intended to track police FRT use
To ensure the guidelines are being followed, the policy also requires standardized reporting in the form of an audit log “to ensure that requests and searches are recorded and reviewable if necessary.” It allows a rather large loophole, however, in stating that “it is recognised that some of the more basic systems offering FRT capability may not provide this functionality, in which case appropriate records about access and use to these systems must be kept.”
Much of the policy comes down to semantics, and as long as there is ambiguity as to what constitutes “appropriate records,” standardization will be practically impossible. Besides which, argues a piece from Radio New Zealand (RNZ), unlike the EU’s precedent-setting AI Act, the policy fails to put in place any external oversight of FRT use: “here, the police audit themselves.”
That scenario is likely to be unpopular with New Zealand’s Privacy Commissioner, who has promised to publish a draft biometrics code this autumn outlining tougher regulations on facial recognition and other biometric tools.
As recently as 2021, New Zealand police signed a “self-regulation pledge” on facial recognition based on recommendations from Dr. Andrew Chen, then a research fellow at the University of Auckland. In a webinar recorded in July, Chen, who now serves as chief advisor for Technology Assurance with the New Zealand police, addresses some of the public concerns around facial recognition – the roots of which were partially sown with an ill-advised FRT trial NZ police conducted in 2020 using technology from Clearview AI.
Chen’s department was formed in the wake of that, to weigh considerations on privacy, security, legality and ethics against procurement processes focused on the financial angle. Its work has been leading up to the publication of the policy, and Chen says there’s a lot in it “to try and make sure that any use of facial recognition is safe.”
Police agencies using facial recognition will get annual check-ins
Only two police agencies will control FRT: forensic services and the high tech crime group. Both, says Chen, “sit inside the national criminal investigations group, so there’s really only one owner that will be authorized to use facial recognition at this point in time.” All data that is collected is destroyed according to clear policies.
To address concerns about bias, Chen points to recent data from the U.S. National Institute of Standards and Technology (NIST) showing that commercial FRT models “don’t demonstrate any meaningful discrimination or bias between ethnicities or genders.” Basing an argument against FRT on these grounds, he says, isn’t likely to convince those who have seen the numbers.
Nonetheless, “there’s broad recognition that it is only an investigative tool and right now is not going to be reliable enough for evidence,” says Chen, noting that FRT outputs are unlikely to appear in a courtroom any time soon. “It will be something that informs a human investigator in the process of doing their work.”
“We all know that this is a controversial area,” says Chen. “And the groups that are using it, we’ll be checking up on them at least annually if not more frequently than that.” But, he says, “I think that the benefits will probably become less hypothetical as we get more success stories.”