Australian legislators spar with platforms, each other over age assurance laws

If there’s one thing every platform can agree on when it comes to age assurance, it’s that biometric age verification measures are a good idea – but probably just not for them.
The latest to suggest that maybe they aren’t subject to the law are TikTok and Snapchat. The companies have reportedly made the case to Australia’s eSafety Commissioner that there are potential legal workarounds to Australia’s incoming social media regulations, which will prohibit users under 16 from having accounts.
A report from MLex says Commissioner Julie Inman Grant and Communications Minister Anika Wells heard from the firms during a parliamentary hearing that they “may be able to offer an experience for under-16 users without being caught by the ban.”
But Inman Grant also recognizes that “every platform that is caught by the social media ban would like to be named in the exemption,” and says TikTok and Snapchat will have to go into more “fulsome detail” if they want to be considered.
YouTube has already made much fuss over the revocation of its exemption from the law – none of which has moved Inman Grant to reverse her decision – yet. Google maintains that YouTube is “a video-streaming platform that Australians use as a content library and a learning resource,” rather than a social media site. It says the age check law will be “extremely difficult” for Australia to enforce, and there is speculation that Google may be lobbying the Trump administration to step in on YouTube’s behalf.
Dynamic list to monitor platforms as they adapt
Inman Grant’s political opponents also point out how the regulatory effort could end up like a game of Whac-A-Mole. But the commissioner has factored a dynamic landscape into her plan. She is set to release more information on a so-called “dynamic list” of platforms that will be assessed to determine whether they are subject to the age verification law.
Nor is the list dynamic only in adding platforms: Inman Grant says companies such as Roblox, WhatsApp, Kick and Pinterest change their features all the time, necessitating constant regulatory assessment. She notes that the latest version of OpenAI’s Sora comes with AI-generated social media functionality.
“They never mentioned that this was on the boil. So, I’ve sent them the regulatory guidance and our assessment tool,” she says.
The eSafety Commissioner gives a nod to Meta Platforms, which she says is “further ahead than most peers in deploying age-assurance technology before the ban takes effect.”
Meanwhile, Wells is preparing to launch a national education campaign this week to raise awareness. “It’s called For The Good Of, and it means for the good of our kids,” says Wells in a transcript of an exchange with Member for Menzies Gabriel Ng.
“We’re doing these things, ultimately, for the good of young people in Australia. It will span television, radio, digital. There will be some on billboards near schools around the country. They’ll see it on TV. They’ll see it online. They’ll see it, ironically, on social media, because until the 10th of December, it is legal for kids to be on social media. And if that’s where they are, that’s where we need to talk to them about what this means and why we’re doing it.”
It’s the over-retention, Shoebridge: Corby addresses much ado about data
Australia’s parliament has been among the main staging grounds for debating age assurance policy, and as such it is no stranger to Iain Corby, executive director of the Age Verification Providers Association (AVPA), who presented evidence to the Senate Environment and Communications References Committee in a recent hearing.
In his opening remarks, Corby makes the case for privacy-preserving, independent third-party age assurance checks. He calls for regulations to mandate privacy by design, and to pursue solutions that use double-blind architecture and zero-knowledge proofs (ZKP), to restrict the sharing of data between parties to what is necessary. He argues that laws should require certification against international standards, and advocates for a risk-based “successive validation technique” that begins with low friction processes like facial age estimation (FAE) and levels up as needed.
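What a risk-based successive validation flow might look like is easier to see in outline than in testimony. The sketch below is purely illustrative: the provider calls (estimate_age_from_face, verify_age_with_document) and the two-year error buffer are assumptions invented for the example, not any vendor’s actual API or an AVPA specification.

```python
# Illustrative sketch of the escalation logic Corby describes. The provider
# functions are hypothetical stand-ins for certified age assurance APIs.

from dataclasses import dataclass

MIN_AGE = 16       # Australia's social media minimum age
FAE_BUFFER = 2.0   # assumed margin to absorb facial age estimation error

@dataclass
class CheckResult:
    passed: bool
    method: str

def estimate_age_from_face(selfie: bytes) -> float:
    """Hypothetical low-friction facial age estimation (FAE) call."""
    raise NotImplementedError  # stand-in for a real provider's API

def verify_age_with_document(doc: bytes) -> int:
    """Hypothetical higher-assurance document check, used only on escalation."""
    raise NotImplementedError

def successive_validation(selfie: bytes, doc: bytes | None) -> CheckResult:
    est = estimate_age_from_face(selfie)
    if est >= MIN_AGE + FAE_BUFFER:
        # Clearly above threshold: accept on FAE alone; no identity data shared.
        return CheckResult(True, "fae")
    if est <= MIN_AGE - FAE_BUFFER:
        # Clearly below threshold: reject without escalating.
        return CheckResult(False, "fae")
    # Borderline estimate: level up to a stronger, higher-friction check.
    if doc is None:
        return CheckResult(False, "fae-escalation-required")
    return CheckResult(verify_age_with_document(doc) >= MIN_AGE, "document")
```

The design point is that most users would clear the low-friction check, so higher-assurance – and more privacy-sensitive – methods are only invoked for borderline cases.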
Questions from the floor interrogate the use of school data to verify age for those without bank accounts, and data retention in the context of Australian privacy law and repeated major data breaches – including the recent Discord breach, which gets blamed on the third-party age assurance provider.
Corby is quick to correct the error: per Discord’s statements, the breach originated with 5CA and its partner software Zendesk. The age assurance provider, k-ID, did the age checks; 5CA handled appeals. “Discord had done a good job on the initial age checks,” Corby says, “but then were bandying around people’s private identity information on a random customer services system, which was a terrible mistake.”
5CA has defended itself in a statement. “Based on interim findings, we can confirm that the incident occurred outside of our systems and that 5CA was not hacked,” it says. “There is no evidence of any impact on other 5CA clients, systems, or data. Our preliminary information suggests the incident may have resulted from human error, the extent of which is still under investigation.”
The data privacy violations are coming from inside the platform
There is, in questioning from Senator David Shoebridge of the Australian Greens, an apparent desire to assign blame to age verification providers. He argues that Australia’s privacy laws aren’t yet ready to accommodate such data collection, in that the Privacy Act 1988 doesn’t include requirements for the deletion of data. He asks about workarounds, like masks and VPNs.
Corby’s response is to ask for more and better regulation. The age assurance industry is that rare sector that seeks regulation as an anchor for public trust; by that logic, every gap called out in its effectiveness is simply another potential amendment to the legislation.
The over-retention of data on the Zendesk system is an issue with implementation, not design or legislation. Age assurance providers are having their feet held to the fire, as they should be, as firms that handle sensitive personal information. But the chain of trust must extend throughout the ecosystem, and platforms that prove to be weak links should be recognized as such.
A line of questioning by Senator Sarah Hanson-Young brings the hearing to the brink of understanding. “The platforms can do this already. They know who, they know whether a child is on a platform. They know whether that child is accessing harmful material or dangerous material and therefore the platform should be held responsible for that. Why? Why are platforms allowing children to access dangerous material now if they know it’s dangerous and they’re kids?”
Shoebridge, however, torpedoes the epiphany. “Well, I think most people would be concerned about having 24-hour permanent surveillance of everything they do, collected in a database, and used by social media platforms,” he says – essentially defining the business model for social media.
Article Topics
age verification | Australia | Australia age verification | AVPA | biometric age estimation | data privacy | double-blind age assurance | facial age estimation (FAE) | Online Safety Act (Australia) | regulation | social media