Facial recognition use planned by police and border security as NIST report sparks concern
Australia’s Home Affairs Department says it needs to increase its use of facial biometrics with data from allied nations and security partners due to greater vulnerability to terrorism and organized crime, according to The Australian.
The systems currently in use were not designed to handle current volumes or increasingly sophisticated risks, according to a departmental report delivered to the government after the mid-2019 election. The report projects that visa applications will rise from 10 million in 2018-2019 to 13 million by 2026.
The department wants to collect facial images and fingerprints from people who apply for an Australian visa from 46 countries. The data would be shared with Australia’s fellow “Five Eyes” partners: Canada, New Zealand, Britain, and the U.S.
The Identity Matching Services Bill, which would significantly expand the government’s use of facial recognition, was rejected last year and is being redrafted to better protect citizens’ rights.
Police use of iFace at 85 Victoria stations revealed
Police in Victoria, Australia, have rolled out iFace facial recognition at 85 police stations to identify wanted suspects, the Sydney Morning Herald reports. Images will be compared with the force’s own database of convicted criminals.
The Herald also reports that Victoria police are not sharing their future plans for use of facial recognition, and have not ruled out implementing the biometric technology on a fleet of 50 “eye in the sky” drones. Police declined to share false positive rates with the outlet, but noted that since the system was deployed in 2015, its decisions have always been subject to human override.
“During offender processing at locations where an iFace camera is in use, there are a range of techniques in place to prevent someone being linked to the wrong image,” a police spokesperson told the Herald.
Delhi police scan protesters
Police in Delhi have used an automated facial recognition system (AFRS) to screen the crowd at a rally by Prime Minister Narendra Modi for people previously filmed at protests, Indian Express reports. India has been rocked by numerous protests against a contentious new citizenship law and the National Register of Citizens, and Delhi police have been filming these protests.
The Delhi police acquired the system after the Delhi High Court ruled on a case related to missing children in 2018. The Express reports that prior to December 22, it had been used for other purposes only three times, at Independence Day and Republic Day parades.
One police database contains images of some 150,000 people with criminal histories, sources told the Express, along with a separate database of 2,000 images of terror suspects, and now a third of “rabble-rousers and miscreants.”
“Each attendee at the rally was caught on camera at the metal detector gate and live feed from there was matched with the facial dataset within five seconds at the control room set up at the venue,” a police official said, according to the report.
An official Delhi Police spokesperson told Express in an email that the effectiveness of any such system depends on the quality of the related databases, and building them up was part of the initial focus of the agency.
“In the next stage, we focused on law and order also and accordingly expanded the datasets to those with known criminal records of relevant categories and also to law and order suspects, identified through extensive archival videography and behaviour analysis at sensitive public protest venues,” the spokesperson wrote. “We have used these datasets last Sunday based on credible intelligence inputs about possible disruptions.”
Further, the representative says the databases are not perpetual and are subject to revision, do not include any racial or religious profiling as a parameter, and are subject to “best industry standard checks and balances against any potential misuse.”
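The workflow the police official describes, with a live camera feed matched against curated image datasets within seconds, typically comes down to nearest-neighbor search over face embeddings. The sketch below is a generic illustration using synthetic vectors, not the Delhi AFRS’s actual pipeline; the embedding dimension, gallery size, and similarity threshold are all assumptions for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the vectors a face-embedding model would
# produce (real systems use learned embeddings of roughly 128-512
# dimensions, one per enrolled face).
gallery = rng.normal(size=(10_000, 128))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

def match(probe, gallery, threshold=0.6):
    """Return (index, score) of the best gallery entry by cosine
    similarity, or None if nothing clears the threshold."""
    probe = probe / np.linalg.norm(probe)
    scores = gallery @ probe  # cosine similarities via dot products
    best = int(np.argmax(scores))
    return (best, float(scores[best])) if scores[best] >= threshold else None

# A noisy copy of gallery entry 42 should resolve to entry 42...
print(match(gallery[42] + 0.05 * rng.normal(size=128), gallery))
# ...while an unrelated probe should fall below the threshold.
print(match(rng.normal(size=128), gallery))
```

In deployments, a score above threshold is normally treated as a candidate for human review rather than a confirmed identification, which is what safeguards like "human override" refer to.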
House Committee Chairman calls for explanation from DHS
Demographic discrepancies in accuracy detailed in NIST’s recent report on bias have prompted U.S. House Homeland Security Committee Chairman Bennie G. Thompson to write to Department of Homeland Security Acting Secretary Chad F. Wolf, expressing concern about the likelihood of misidentification by DHS’s facial recognition systems.
In the letter, Thompson notes that facial biometric algorithms tested by NIST were up to 100 times more likely to misidentify black and Asian people than white people, and also showed differences between age groups and genders. Thompson calls the results “shocking,” and suggests they “raise serious questions as to how DHS’s internal reviews could have missed such drastic disparities apparently inherent to these technologies.”
Both the NIST report and MIT researcher Joy Buolamwini, a noted opponent of the technology’s use for some government functions, have noted that demographic disparities on the scale Thompson cites are not inherent to the technology and are not present in all algorithms to the same degree.
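To make the “100 times” figure concrete: NIST’s demographic comparisons are expressed as false match rates, the fraction of impostor comparisons (two different people) that a system wrongly declares a match, computed per group. The counts below are invented purely for illustration, not NIST’s measurements:

```python
# Toy comparison log: (group, same_person, system_said_match).
# Counts are invented to illustrate a 100x false-match-rate gap.
comparisons = (
    [("group_a", False, True)] * 1          # 1 false match ...
    + [("group_a", False, False)] * 9_999   # ... in 10,000 impostor trials
    + [("group_b", False, True)] * 100      # 100 false matches ...
    + [("group_b", False, False)] * 9_900   # ... in 10,000 impostor trials
)

def false_match_rate(rows, group):
    """Share of impostor pairs that the system wrongly declared a
    match, within one demographic group."""
    impostors = [r for r in rows if r[0] == group and not r[1]]
    return sum(1 for r in impostors if r[2]) / len(impostors)

fmr_a = false_match_rate(comparisons, "group_a")  # 0.0001
fmr_b = false_match_rate(comparisons, "group_b")  # 0.01
print(f"group_b's false match rate is {fmr_b / fmr_a:.0f}x group_a's")
# prints: group_b's false match rate is 100x group_a's
```

As the report and its critics both stress, the size of such a ratio depends on the particular algorithm and its operating threshold; it is a property of a given system, not of facial recognition as such.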