Google disappointed over YouTube’s inclusion in Australia social media law

Big Tech continues raging against law but biometrics providers say they’re ready

Australia’s law restricting social media accounts to users aged 16 and older has not yet taken effect, but the spin from Big Tech has already started. On its Australia blog, Google has published a post on “what’s changing for under 16s and parents on YouTube in Australia” on December 10, when the ban takes effect. It comes with a big frowny face from the company, which has fought tooth and nail for an exemption for its YouTube platform.

“This is a disappointing update to share,” says the post. “We deeply care about the safety of kids and teens on our platform; it’s why we spent more than a decade building robust protections and parental controls that families rely on for a safer YouTube experience. But as we have consistently said, this rushed regulation misunderstands our platform and the way young Australians use it.”

The law to protect kids, Google claims, will actually make kids less safe on YouTube, by cancelling out existing safety features. “As the Social Media Minimum Age Act requires kids to use YouTube without an account, it removes the very parental controls and safety filters built to protect them.”

YouTube has valid reasons to be upset. The massive video sharing platform that likes to think of itself as the successor to TV was initially exempt from the law, on the grounds that it provides educational content and functions beyond what a social platform can do. However, that carve-out was rescinded, prompting a furious response from YouTube and its army of creators, who say the move will choke youth creativity and rob kids of valuable programming.

What does Australia’s law actually mean for YouTube users?

After December 10, anyone who wants to sign into YouTube has to be 16 or older to do so. Anyone whose account shows them to be under 16 will be automatically signed out. Google says this is a problem because it cuts off access to features that only work when signed in, “including subscriptions, playlists and likes, and default wellbeing settings like ‘Take a Break’ and Bedtime Reminders.”

The big headline, often buried in the debate, is that “viewers can continue to watch YouTube while signed out.”

Google is likely cognizant that many kids will be able to keep watching Ms Rachel without problems on December 10, and has decided to focus its PR efforts on the safety features that will be affected, rather than on the argument that the law will stop kids from accessing beneficial content.

“Parents will lose the ability to supervise their teen or tween’s account on YouTube, as these accounts only work when they are signed in,” says Google. “That means parents will no longer be able to use any controls they have set up, such as choosing an appropriate content setting or blocking specific channels.”

Again, however, there is an important caveat: “YouTube Kids is not affected.”

Another pillar of Google’s argument is what the law will mean for young creators. “Creators under 16 will no longer be able to sign in to YouTube, upload videos or manage their channels, and their channels will no longer be viewable.” That message may end up resonating more loudly than others; one can imagine, for example, a fifteen-year-old wanting to host a cooking channel, and frustrated parents asking why they can’t.

A pertinent question that often gets ignored: is it really so bad to make them wait a year to broadcast their faces to everyone, everywhere? As discussions about deepfakes and likeness protection grow in urgency, there may be additional reasons for keeping kids’ images off the internet. Some would point to young creators with large followings and associated revenue – to which the response should probably be, should kids be dependent on Big Tech’s platforms to make a living? Should anyone?

YouTube wants TV’s reach but not its restrictions

Google’s baseline argument is clear. The law “fundamentally misunderstands why teens come to YouTube in the first place,” Google says. “YouTube is a video streaming service where they come to watch and learn – everything from ‘how to tie a tie’ videos, to famous speeches, to newsmaking podcasts, to live concerts, to epic sports highlights. And increasingly, kids, teens and families are watching YouTube on television screens in their living room.”

The first part of Google’s statement is true: YouTube is a very useful learning tool. The second, however, points back at why the company’s defense feels false: if YouTube is the new TV, presumably that means being governed by the same regulations that apply to TV and film. It also paints a rosy, idealized picture that could easily be countered with an image of a teenager hunched over their laptop at midnight, mainlining extreme political content, conspiracy theories or Power Slap contests while munching on a bowl of ketamine.

In response to Google’s “I’m not mad, just disappointed” note, Australian Communications Minister Anika Wells suggests that “if YouTube is reminding us all that it is not safe and there’s content not appropriate for age-restricted users on their website” in a logged-out state, “that’s a problem that YouTube needs to fix.”

She says it’s “outright weird that YouTube is always at pains to remind us all how unsafe their platform is” when users aren’t logged in.

Claiming to protect creators, YouTube uses their biometrics to train AI

It seems weird because the problem is to be found in the basic premise. YouTube can say all it wants how it believes in “protecting kids in the digital world, not from the digital world,” or how it’s “invested for more than a decade in consultation with child development experts to build age-appropriate products for our youngest users.” But the larger trajectory of Google, and all of its Silicon Valley counterparts, tells a different story.

At present, these companies are hell-bent on cashing in on the AI buzz, using the tech wherever and whenever they can. This week, it came to light that YouTube’s privacy policy is tied to Google’s, which allows it to train its AI on facial biometrics that creators have uploaded as a likeness protection feature. Moreover, there have been ample concerns voiced over the invasive potential of YouTube’s algorithmic age inference system. It’s time to stop believing that Big Tech cares a whit about its users’ well-being.

Australian porn site legislation coming in March

Politicians in Australia’s opposition parties say they have lost confidence that Australia’s law will work; ABC News quotes Opposition Leader Sussan Ley, who says it has been “botched.”

Nonetheless, the communications minister and her colleagues at the eSafety Commission appear to see the social media law as the kickoff to a wider legislative effort around online safety. InnovationAus quotes Wells’ recent address to the National Press Club, during which she outlined a plan for 2026 that involves prioritizing “plans for a digital duty of care, industry codes for search engines under the Online Safety Act, restrictions for AI bots and ‘nudify’ apps, gambling safety laws and updates to the News Bargaining Incentive (NBI),” which aims to ensure large digital platforms contribute to a sustainable news and journalism sector in Australia.

“The wheels of government don’t turn as quickly as we’d like, but we have been making big gains in this space,” she says. “A digital duty of care will create a proactive system and put the responsibility on these services to prevent harm from occurring.”

“We will be working on that over the next year and that will be about the broader question of content and what platforms owe us when they offer these [online services] and then sell our data for advertising,” she says.

The Age Verification Providers Association has taken notice. In a post on LinkedIn, it points to Australia’s incoming rules for adult content sites – following similar efforts in the UK, EU and U.S. to mandate age assurance for access to online pornography. Other industry-designed codes covering extreme violence or self-harm content will also take effect in 2026. The eSafety Commissioner says the rollout will begin on December 27 with codes covering search engines, server hosts and internet service providers; codes covering websites, storage services, AI chatbots, app stores and equipment providers will follow in March.

AI gets soft regulatory approach for now but duty of care is brewing

One notable area in which the government is more cautious on the regulatory side is AI. Wells says the new AI Safety Institute will work with the eSafety Commissioner on potential online harms like AI bots and ‘nudify’ apps, but emphasized that restrictions would be “targeted and measured” so as not to stifle innovation or investment. In this, she falls in line with governments worldwide, who are caught between the increasing calls for oversight and an industry that has cornered large swaths of the global economy.

Nonetheless, it is likely only a matter of time before LLM chatbots and other products of the generative AI boom face stronger regulatory measures, catching up with legislative efforts on social media.

The bigger push toward a digital duty of care will ultimately have the biggest implications for digital culture as a whole. Australia’s social media law covers ten platforms – Facebook, Instagram, Kick, Reddit, Snapchat, Threads, TikTok, Twitch, X, and YouTube. Collectively, they host billions of users around the world, and have operated for years as digital mavericks whose products are too transformative and necessary to worry about online safety. Fines attached to the law will sting, but do little to destabilize a company like Meta.

Imposing a duty of care would be a strong indicator that the world has come to generally accept as a truth that social media isn’t good for you, and that the companies running it aren’t working in the public interest. Anika Wells suggests society’s relationship with social media is headed in the direction of smoking, lead paint and cocaine in cola. “I genuinely believe that a few years from now, eyebrows will be raised not at this reform, but why it took so long for it to be implemented.”

Yoti says UK success proves age checks at scale are possible

Some, of course, have known for a long time. In a post explaining the Australian law, digital ID and age assurance provider Yoti aims to calm fears that the law could break the internet. “We can say with confidence that the infrastructure is ready, our tech works at scale and there’s no need for panic,” the firm says.

“When the UK’s Online Safety Act went live on 25th July, traffic surged to 40-160 requests per second that first weekend, almost entirely from UK users. The systems didn’t crack. Instead, they scaled smoothly. France followed by enforcing age checks for access to adult content in 2025. And most importantly, people adopted privacy-preserving ways to prove their age.”

Yoti says the lessons for Australia are that users want easy, reusable methods for proving their age, and that interest in digital ID is likely to grow as a result of the law.

Andy Lulham is COO at Verifymy, which specializes in email-based age checks. He says that while some will clearly feel short-changed by the ban, “when it comes to implementation and kids demonstrating their age on platforms, the process should hold no fears. Rather than relying simply on ID docs, advanced technology like email-based age checks can estimate the vast majority of 16+ users and deliver a result in seconds while preserving privacy. Facial age estimation will be another popular option.”

“As the digital world evolves, there is no silver bullet for online harms. With other nations set to follow in Australia’s footsteps, a collaborative approach between families, schools, regulators, platforms and children themselves will ensure these laws work and young people have access to age-appropriate experiences online.”
