Aussie privacy watchdog casts evil eye at Clearview, Auror for biometrics collection
The Australian government will not pursue further investigations into Clearview AI, but a statement issued by the Office of the Australian Information Commissioner (OAIC) says a previous determination finding the company in breach of the nation’s privacy laws still stands and calls the company’s harvesting of biometric data “troubling.”
“I have given extensive consideration to the question of whether the OAIC should invest further resources in scrutinizing the actions of Clearview AI, a company that has already been investigated by the OAIC and which has found itself the subject of regulatory investigations in at least three jurisdictions around the world as well as a class action in the United States,” says Privacy Commissioner Carly Kind, noting other instances in which Clearview has run afoul of regulatory bodies.
“Considering all the relevant factors, I am not satisfied that further action is warranted in the particular case of Clearview AI at this time. However, the practices engaged in by Clearview AI at the time of the determination were troubling and are increasingly common due to the drive towards the development of generative artificial intelligence (AI) models.”
In effect, the statement is not so much a win for Clearview as it is Australia’s regulators saying they have already been there and done that when it comes to censuring the company’s data collection practices.
However, Kind’s statement takes time to cast extra doubt on Clearview, noting that, while early-2024 media reports on the company’s continued collection of Australians’ face biometrics were not based on new information, they nevertheless “gave rise to questions about whether Clearview AI was complying with the terms of the Australian Information Commissioner’s 2021 determination.”
The Commissioner concludes the statement by noting that “the OAIC will soon be issuing guidance for entities seeking to develop and train generative AI models. In the meantime, we reiterate that the determination against Clearview AI still stands.”
As regulators around the world have scrambled to address the use of facial recognition by law enforcement, Clearview is now expected to work primarily with law enforcement in the U.S. The company’s database of facial images for biometric comparison now totals around 50 billion.
Auror crime prevention platform hit with six-month OAIC privacy probe
Australia’s privacy watchdog has also been running a probe on New Zealand crime intelligence platform Auror for six months, according to reporting from MLex. Information Commissioner Kind initiated the investigation into the Auckland-based anti-crime software company on February 2, 2024, in response to concerns about its use by Australia’s federal police force.
Complaints harken back to questions from Greens senator and digital rights advocate David Shoebridge, who in a 2023 parliamentary session asked OAIC officials if they knew Australian Federal Police (AFP) were using Auror’s platform without adequate privacy checks.
A statement from Auror says it enhances privacy, rather than violating it. “We work in partnership with retailers and police to keep frontline workers safe and to deal with the significant challenge of retail crime impacting communities.”
The company bills itself as “a platform for retailers to prevent crime, reduce loss, and make stores safer” by “transforming intel reported by frontline teams into intelligence that removes offender anonymity and enables teams to safely prevent crime.” Some 40 percent of Australian retailers deploy it.
Put simply, the platform collects and uploads video footage for potential sharing with law enforcement agencies in the event of a crime. But that’s not all: a report from Crikey says the firm also collects facial recognition scans, license plate data, self-checkout AI data and other information to build a thorough security profile.
The outlet previously reported on internal emails from the federal police showing that more than 100 members of its staff had used Auror’s software without any agency guardrails around its use.
Police, however, seem unlikely to be moved by the OAIC’s view without defined legal consequences. In a blog post from July painfully titled “Fighting back with an Auror of confidence,” the New Zealand Police Association extols the benefits of Auror and its ease of use.
“With a simple push of a key, the crime reporting platform Auror allows retailers to report crime and alert fellow retailers to potential criminals nearby, and police investigation teams to simplify the identification and apprehension of thieves causing grief throughout the country.”
One noteworthy section delineates exactly what Auror does: “Auror does not operate cameras. Rather, the process involves its retail customers recording what happens in their own stores, car parks and forecourts and through the platform, electing whether to share their footage with police as and when police need it.”
Regardless of how much choice is built into the system, however, the post also contains a telling statement from officer Richard Bourne, who oversees Auror for New Zealand Police’s Southern District Investigation Support Unit: “We capture the before, the during and the after images of people involved in crime, sometimes before they probably even think about committing the crime!”
How do we investigate thee? Let OAIC count the ways
Those who find themselves in the line of OAIC’s fire should not feel singled out. The agency is currently running probes into retailers Kmart, Bunnings and 7-Eleven, and has six major investigations underway: two into telco Singtel Optus, plus probes into health insurer Medibank Private, law firm HWL Ebsworth, U.S. financial services company American Express and Australian digital installments and lending business Latitude Financial.
The 7-Eleven probe relates to the convenience store chain’s use of facial recognition and its collection of sensitive biometric information through the creation of “faceprints” – algorithmic representations of individual faces – to weed out fake customer feedback.
The investigation also implicates an unnamed third-party supplier. The Bunnings and Kmart probes also concern the use of facial recognition technology in retail.
Article Topics
Auror | Australia | biometrics | Clearview AI | data privacy | facial recognition | OAIC