Australia’s privacy watchdog publishes regulatory strategy prioritizing biometrics

The Office of the Australian Information Commissioner (OAIC) has launched a new digital ID regulatory strategy mapping out how it intends to encourage people and businesses to “shift to safer and more protective means of ID verification, and ensure privacy is respected in Australia’s Digital ID System and the broader economy.”
A post from Annan Boag, general manager of regulatory intelligence and strategy, attempts to capture the general public's vexation over digital identity verification.
“Too often, verifying your identity means sharing ID documents,” Boag writes. “I don’t think I’m alone in feeling a twinge of concern when hitting send on an email with a photo of my passport or driver’s licence, or when handing them over to be scanned when I enter a venue. Where’s the document going? What happens if it gets into the wrong hands?”
Boag follows this with a key observation: “organizations collecting this information are often just as uncomfortable about holding ID documents as most people are about sharing them.” The fear that corporations are harvesting and storing vast amounts of data overlooks that most of them don’t want the risk associated with storing personal information.
“When things go wrong,” Boag says, “customer personal information like ID documents are the costliest kinds of records for a business to have compromised in a data breach.”
Privacy regulator to educate, monitor, enforce, deter and collaborate
The desired outcomes of the OAIC’s regulatory strategy are broad, focusing on issues like education and trust. In the main, the government wants Australians to be smart about digital ID, to be able to recognize unsafe IDV practices, and to feel they can trust the country’s digital identity system.
To that end, the OAIC says it plans to perform five key functions. It will provide education to Australians and businesses and encourage them to switch to more secure means of identity verification that comply with the law. It will monitor for non-compliance and alert enforcement units when an investigation is needed. It will enforce privacy safeguards, “including resolution of possible breaches through investigation, litigation and other formal enforcement outcomes.” It will ensure visibility of regulatory actions and compliance outcomes to deter violations. And it will collaborate to build strong relationships with the regulated community, industry and government.
Biometrics, identity verification are ‘regulatory focus areas’
The strategy plan includes a table of activities and estimated timelines, a detailed breakdown of actions in specific categories, and a list of projected long- and short-term outcomes. The goals are ambitious in scope: a desired short-term outcome is to “mature existing awareness about privacy across multiple domains of life” so that “individuals will develop a more nuanced understanding of privacy issues recognising their significance across various aspects of their lives, including personal, professional, and social domains.”
Laws, skills training and better security tools are one thing, but changing how people understand their privacy is a major social undertaking. The OAIC’s long-term outcomes seem more rooted in practicality; they include the widespread implementation of enhanced privacy compliance practices for organizations, better public understanding of the OAIC’s role as regulator, and enhanced data handling industry standards.
Of note is the statement that the OAIC will focus “proactive regulatory efforts” on biometric information; example activities include prioritizing complaints regarding biometric information and analyzing and reporting on systemic trends to inform compliance and enforcement approaches.
Also on the list of regulatory focus areas are identity verification by unaccredited ID services, data retention and the matter of express consent.
Commish reflects on Bunnings facial recognition decision
Privacy Commissioner Carly Kind says the watchdog has its eye on deployments of facial recognition in the retail and hospitality sectors, and will address community privacy concerns about rental apps in the real estate sector and connected cars.
AI is a matter of ongoing concern, and compliance in AI model training and development will be a major focus for the regulator.
In late February, Kind delivered a speech on privacy and security in retail that referenced her decision in the Bunnings case, which led to the publication of guidance on the use of facial recognition technology focused on four key privacy concepts: necessity/proportionality, consent/transparency, accuracy/bias, and governance.
Kind says she based her Bunnings decision (which the retailer is appealing) on what she deems to be shortcomings on those concepts. But she also takes the wider societal view into consideration: “Our research told us that more than a quarter of Australians feel that facial recognition technology is one of the biggest privacy risks faced today, and only 3 percent of Australians think it’s fair and reasonable for retailers to require their biometric information when accessing their services.”
As such, “thinking about what the law permits, but also what the community would expect” is critical.
Changing privacy laws to reflect current concerns is not a fast process, but Kind suggests some changes can be made to policy before the next “tranche” of changes to Australian law passes. “I think it’s sufficiently important and urgent that we don’t wait for the legislative reform at this stage,” she says, “and see what we can do via application in determinations and enforcement proceedings.”
Ultimately, changes in the law regarding the right to erasure, a “fair and reasonable” test, a direct right of action and removal of small-business exemptions could address key gaps in the Privacy Act 1988.