
Meta pledges compliance with social media laws while continuing pushback

Hint of desperation from Silicon Valley giant as UK eyes legislation modeled on Australia 
Meta wants Australia to know that it’s toeing the line. That’s the message the company conveys with a new update addressing its compliance with the country’s controversial social media legislation, which bars kids under 16 from using massive social platforms.

The owner of Facebook and Instagram lays out its case in numbers. It says that, “as of 11 December 2025” – the day after Australia’s law took effect – “we removed access to almost 550,000 accounts belonging to people we understand to be under 16 years-old.” Of those, 330,639 were Instagram accounts, 173,497 were Facebook accounts, and, remarkably, 39,916 accounts belonged to underage users of Threads, Meta’s Twitter knockoff.

“Ongoing compliance with the law will be a multi-layered process that we will continue to refine, though our concerns about determining age online without an industry standard remain,” says the update.

It goes on to note its participation in the OpenAge Initiative, the reusable age check system spearheaded by compliance provider k-ID and based on FIDO passkeys. Meta says it will begin to integrate OpenAge’s AgeKeys into its apps in Australia and other markets in 2026.

Meta leans on contradictory arguments

The pattern emerging from Meta’s camp seems to be to concede with reservations: stay compliant while continuing to battle on the policy front for age checks to be implemented at the app store level, where Google, Apple and other firms would be responsible for age assurance, rather than individual platforms.

While the company opposes social media legislation in the U.S. on First Amendment grounds, in Australia it has based its argument on volume. “Given U.S. research showed teens use more than 40 apps a week, and many of these may not prioritise safety, adopt new age assurance methods, or be in scope of Australia’s social media ban, we still believe there is a better way forward, which is age verification and parental approval at the app store level.”

This is effectively the same tack Big Porn has taken in its fight against age verification rules: some sites won’t comply, so it’s unfair to make any one site do it; instead, users should have to verify their age before they use any app at all, so that the problem rests in the device, rather than the platform.

The Silicon Valley giant bolsters its logic with a list of concerns it says “have been raised by experts, youth groups, and many parents.” Age checks, it says, prevent vulnerable teens from getting support through online communities. They drive teens to “less regulated apps and parts of the internet.” They don’t always work, especially around threshold ages. And there is “little interest in compliance from many teens and parents.”

The list can be easily dismissed, at least coming from Meta. If it cared about vulnerable teens, it wouldn’t ignore the harms its platforms perpetuate. Unlike the case of porn, there is no credible “alternative app” to replace Facebook or Instagram that is not covered by Australia’s law; scale and reach are what make social platforms what they are, and new formats have rarely managed to break through. The most recent to attain mainstream penetration is TikTok, which turns ten in 2026.

The age assurance industry itself has been among the loudest voices to admit that the tech and its use are a work in progress. And finally, if there is “little interest in compliance” with platform-level age checks from teens and parents, there is nothing to suggest that interest would increase should age checks be moved elsewhere in the tech stack.

Self-interest drives efforts to avoid regulation

Meta wraps up its defense by declaring that “the premise of the law, which prevents under 16-year-olds from holding a social media account so they aren’t exposed to an ‘algorithmic experience’ is false.” It says algorithms are in play even if users aren’t logged in, “albeit in a less personalized way.”

In the end, Mark Zuckerberg’s multibillion dollar company says it is committed to meeting its compliance obligations, but will continue to “call on the Australian government to engage with industry constructively to find a better way forward.”

For Meta, a better way forward ultimately boils down to what works in its best interest. The social media firm accused of promoting harmful content to teens, despite knowing the risks, continues to insist that any regulation of its operation will harm teens. It is a hollow argument.

In light of recent developments on X, Elon Musk’s microblogging site-turned-CSAM generator, Meta may see a grotesque opportunity to contrast itself as the well-behaved platform that cares. But it is difficult to see it conceding to any scenario in which it bears the primary legal responsibility for keeping underage kids off its services.

UK pondering under-16 law; AVPA likes the idea

Too bad for Meta, momentum is not in its favor. In the UK, the furor over X’s chatbot Grok mass-producing sexualized images of women and children has given new fuel to calls for a social media law similar to Australia’s, with the UK Conservative Party saying it would follow Australia’s lead.

The government maintains it has no plan for a “blanket ban” on major social platforms for users under 16. But, according to a report from the Guardian, it is “closely monitoring the impact of moves taken to prevent children setting up accounts on Facebook, Instagram, Kick, Reddit, Snapchat, Threads, TikTok, X, YouTube and Twitch.”

Meanwhile, a major teaching union, NASUWT, has called for the UK to adopt Australian-style legislation, and there is growing support across the UK political spectrum.

The Age Verification Providers Association (AVPA) is in favor of moving the UK in the direction of Australia’s law, and says on LinkedIn that it is “preparing a formal lessons-learned paper drawing on our members’ experience and expert judgement since the Australian rules came into force on 10 December 2025.”

The post highlights a few major points it intends to make. The first further puts the lie to Meta’s assertions that it is doing all it can to comply. “Claims that age limits are ‘too easy to circumvent’ largely reflect platform implementation choices, not technical limits,” it says. “Social media platforms have opposed age-based access rules and are applying age checks reluctantly.”

“In some cases, they have failed to deploy well-established safeguards such as liveness detection to prevent the use of photos or avatars, or buffer ages where age estimation tools test several years above the legal threshold to reduce false positives.”

Furthermore, AVPA says, VPNs have proven to be a non-issue, with an initial spike in use plateauing not long afterward.

Finally, it’s only been a month, thank you very much, and the government has not finished its phased rollout.

“The Australian regime is still bedding in. The right question is not whether perfection was achieved on day one, but whether the regulatory framework supports iteration, evidence-based adjustment, and meaningful protection for children.”

Australia struggling with Grok deepfake issue

Countries have begun banning X over its child porn problem, with Malaysia and Indonesia announcing bans today. Ofcom has floated the idea, as has Australia – where, according to a media release from the eSafety Commission, the pertinent question appears to be, “how much child exploitation is enough?”

“While the number of reports eSafety has received remains small, eSafety has seen a recent increase from almost none to several reports over the past couple of weeks relating to the use of Grok to generate sexualised or exploitative imagery,” it says. “eSafety will use its powers, including removal notices, where appropriate and where material meets the relevant thresholds defined in the Online Safety Act.”

It may be waiting for the best legal tools. The statement notes that “additional mandatory codes will commence on 9 March 2026, which create new obligations for AI services, among others, to limit children’s access to sexually explicit content, as well as violent material and themes related to self-harm and suicide.”
