
Social media giants not properly following Australia age check rules, says eSafety

Compliance update identifies ‘poor practices,’ commissioner threatens consequences

In its first compliance update since Australia's Social Media Minimum Age (SMMA) law took effect, eSafety is waving a finger at major social media companies for failing to fall in line with their obligations.

A statement from eSafety says it has “significant concerns about the compliance of Facebook, Instagram, Snapchat, TikTok and YouTube,” and is “continuing to gather evidence necessary to inform potential enforcement action.”

Regulatory bodies have been eager to proclaim themselves willing and able to enforce their rules. Yet the titans of Silicon Valley are getting a first warning, with threats of punitive measures if they don’t make improvements.

And then there were 5: Meta, Snap, TikTok, YouTube to face brunt

The compliance update summarizes data collected in the first three months of implementation of the SMMA obligation, focusing on 10 platforms: Facebook, Instagram, Kick, Reddit, Snapchat, Threads, TikTok, Twitch, X and YouTube.

As a result of the findings, compliance and enforcement efforts will now focus on just 5: “eSafety is actively investigating potential non-compliance in relation to Facebook, Instagram, Snapchat, TikTok and YouTube. We are aiming to finalize at least some of these investigations and make a decision about any enforcement action by the middle of 2026.”

The statement acknowledges “there has been some progress in the first 3 months, including large scale account removals and more visible underage reporting pathways.” However, it says, “insights from a range of sources including platforms’ responses to legally enforceable information-gathering notices, public reporting and eSafety’s pulse survey, show major gaps remain.” Many users under 16 have retained accounts. Most often, this is because they had not yet been asked by the platform to verify their age.

Among the other “poor practices” eSafety cites are age assurance prompts presented to those who have declared their age to be under 16, “enabling children aged under 16 to repeatedly attempt the same age assurance method to ultimately obtain a 16+ outcome,” failure to implement reporting protocols, and “insufficient measures to prevent new under 16 accounts being created.”

Referring specifically to biometric facial age estimation (FAE), the report finds it to be an “effective form of age assurance” if used well – “in particular, for confirming that a person is considerably younger or older than a given age threshold.”

“However, facial age estimation is known to have higher error rates for children near the age threshold of 16 years.” As such, platforms that encouraged self-declared 14- and 15-year-olds to undergo age checks and offered facial age estimation to increase their account age would have been aware that many of them would likely receive “a false 16+ outcome.”

Not good enough: Inman Grant

The concerns have prompted eSafety Commissioner Julie Inman Grant to adopt “an enforcement stance.” Still, she does not intend to rush into formal censure without proof.

“Any enforcement action requires sufficient evidence, which takes time to gather,” the commissioner says. “This reform is unwinding 20 years of entrenched social media practices.”

Nonetheless, while “durable, generational change takes time,” Inman Grant says “these platforms have the capability to comply today and we certainly expect companies operating in Australia to comply with our safety laws.”

“They can choose to do so or face escalating consequences, including profound reputational erosion with governments and consumers globally.” They can also face civil penalties of up to 49.5 million Australian dollars (about US$34 million).

The findings gesture at a truth that appears to be emerging from the regulatory thicket: it’s probably not enough to put age restrictions on the large platforms that have come to dominate the internet and online culture. At worst, age assurance laws can affect services that aren’t the primary target, while merely inconveniencing those companies, worth billions of dollars, which present the biggest risk.

Big Tech at the root of the problem 

Dr. Rob Nicholls, a senior research associate at the Faculty of Arts and Social Sciences of the University of Sydney, says that while “Australia has enacted genuinely ambitious legislation, this report shows ambition alone is not enough.”

“The compliance gaps identified are not accidental,” says Nicholls in a release. “The platforms engineered workarounds into their own age assurance systems, failed to close reporting pathways and allowed repeated attempts to game facial recognition. These gaps reflect the rational commercial behaviour of platforms operating under a law that still leaves substantial discretion in their hands.”

Nicholls believes “enforcement action against five major platforms simultaneously will test the regulator’s resources and resolve in equal measure.”

Which raises the question: why is eSafety spearheading a global regulatory push against the world’s most powerful companies? An opinion piece for the Guardian ostensibly argues against classifying games as social media under online safety laws, but it nails both a fundamental problem and its source.

“It is extremely difficult, perhaps impossible, to cancel out potential online harms without also cancelling out the benefits that these things have brought to young people,” says the author, referring to online gathering spaces. “I suspect that the answer lies in trying to take back the internet from the manipulative, algorithmic, engagement-driven big tech companies that have colonized it, rather than banning young people from engaging with it.”
