Australian age assurance law prompts removal of 4.7M underage accounts

Australian regulators have released initial results from the country’s social media age restriction, showing that major social media companies removed access to about 4.7 million accounts flagged as belonging to children under 16 in the first half of December.
“To put that in perspective, there are 2.5 million young Australians between the ages of eight and 15,” says Julie Inman Grant, Australia’s beleaguered but resolute eSafety Commissioner, in a post on LinkedIn. “This is exactly what we hoped for and expected: early wins through focused deactivations.”
The outcomes are not perfect, but perfection was never a reasonable ask. Inman Grant acknowledges reports that some under-16 accounts remain active. Full compliance may be a holy grail that can never be achieved, no matter how many biometric age verification tools are deployed. “We’ve been clear all along that absolute perfection is not a realistic goal, but this is an incredibly positive start,” Inman Grant says. “Looking ahead, we expect continuous improvement – in deployment, accuracy and efficacy.”
Nor does eSafety intend to stop enforcing its rules; the agency notes that platforms are responsible for cracking down on circumvention of safeguards.
Like laws on alcohol and speeding, the point is to set the norm
“While some kids may find creative ways to stay on social media, it’s important to remember that just like other safety laws we have in society, success is measured by reduction in harm and in re-setting cultural norms,” Inman Grant says.
“Speed limits for instance are not a failure because some people speed. Most would agree that roads are safer because of them. Over time, compliance increases, norms settle, and the safety benefits grow.”
“And while effective age assurance may take time to bed down, we’ve had incredibly positive initial feedback already from three of the largest age assurance providers who have told us that Australia’s implementation of the social media minimum age has been relatively smooth and this was supported by proactive public education and communication about what to expect in the lead up to 10 December.”
“What ultimately matters is that this new law delays exposure, reduces harm and sets a clear social norm.”
Going to take patience and time: Inman Grant
Inman Grant’s main message in publishing the initial results is that the age check law appears to be working, but that it’s not about instant gratification. “We are still at the very beginning of this journey, and it is evident platforms are taking different approaches based on their individual circumstances, resulting in variations in the data and outcomes currently surfaced,” she says. The changes Australia is seeking aren’t going to be measured in weeks, but rather in generational shifts.
This, Inman Grant says, is “precisely why eSafety is undertaking an independent, longitudinal evaluation to measure these impacts over time.” Youth mental health experts and academics will collaborate on measurements and assessments.
A secondary takeaway is that although eSafety is focused on the largest social platforms, it expects smaller platforms to self-assess, to determine if they meet the criteria set out in the law.
The agency will continue to monitor data, reports and information, “including any indications of large-scale user migration to other platforms.” Should a legitimate alternative to Instagram manifest – an unlikely scenario, given the legislative, economic, social, technical and political factors at play – eSafety will find it, and enforce the law.
Early analysis has identified a spike in downloads of alternative social media platforms, but those “have not necessarily translated into commensurate usage.”
Those looking for more numbers as continued justification for the law will be disappointed to know that eSafety considers the matter – or at least the statistical data – closed.
“To maintain the integrity of its investigations, protect legal privilege and preserve the ability to take appropriate enforcement action where necessary, eSafety will not be publishing specific numbers or detailed information obtained using its information-gathering powers.”
US, UK media respond to Australian news
The controversy around the law has prompted reports from various perspectives.
CNBC reports a “mixed reaction from teens, experts and tech firms.” It highlights begrudging compliance by Meta and Reddit, both of which have pushed back against the law. And it cites a Fox News poll of over 1,000 registered voters in the U.S., showing that 64 percent of respondents favored both a social media ban for teens and a ban on cellphones in K-12 classrooms, suggesting that the tide is turning among Americans.
The BBC looks at the UK angle, as Prime Minister Keir Starmer appears to warm to the idea of a social media age assurance law in the Australian mode. The House of Lords will vote on proposals next week. Politically, the issue appears to be seen as a foregone conclusion: the BBC quotes a senior government source, who says “it’s what the public and parents want.”
Canada’s CBC has a wire story from Thomson Reuters that quotes Australian Prime Minister Anthony Albanese, who is clear in his declaration that the so-called ban is working: “This is a source of Australian pride. This was world-leading legislation, but it is now being followed up around the world.”
Safety codes covering search engines now in effect
The initial kerfuffle has concluded, but the debate will continue to simmer as more regulations roll out in 2026. The first tranche of Age-Restricted Material Codes, which took effect on December 27, 2025, requires online services, including search engines, to protect children from exposure to age-inappropriate content like pornography, high-impact violence and material which promotes self-harm, suicide and disordered eating.
Ahead of the deadline, eSafety has published regulatory guidance on the codes. It notes that they cover app stores, social media services, equipment providers, online pornography services and generative AI services. Inman Grant says it is motivated by data showing that 1 in 3 young people say their first encounter with pornography was before the age of 13, and that it was “frequent, accidental, unavoidable and unwelcome.”
“We know that a high proportion of this accidental exposure happens through search engines as the primary gateway to harmful content, and once a child sees a sexually violent video, for instance maybe of a man aggressively choking a woman during sex, they can’t cognitively process, let alone unsee that content. From 27 December, search engines have an obligation to blur image results of online pornography and extreme violence to protect children from this incidental exposure, much the same way safe search mode already operates on services like Google and Bing when enabled.”
Suicide, self-harm searches trigger automatic redirects to help
Searches related to suicide, self-harm or eating disorders will automatically redirect to mental health support services.
“It gives me some comfort that if there is an Australian child out there thinking about taking their own life, that thanks to these codes vulnerable kids won’t be sent down harmful rabbit holes or to specific information about lethal methods, but will now be directed to professionals who can help and support them,” Inman Grant says. “If this change saves even one life, as far as I’m concerned, I believe it’s worth the minor inconvenience this might cause some Australian adults.”
The commissioner is clear in stating that “what this code won’t do is require Australians to have an account to search the internet, or notify the government you are searching for porn.”
“This is about protecting our kids from accidental exposure to material they will never be able to unsee.”
Article Topics
age verification | Australia | Australia age verification | biometrics | eSafety Commissioner | Online Safety Act (Australia) | social media