No carveout for YouTube in age check law, advises eSafety Australia

Exemption to age assurance rule challenged but platform snaps back
Is YouTube social? Is it safe? These are the questions currently hounding Australia’s draft safety codes for protecting children online. In particular, they bear on the social media minimum age (SMMA) obligation restricting kids under 16 from creating social media accounts, and its attendant rules on age verification and age estimation.

The eight codes have gone back and forth between government and industry since the eSafety Commissioner prompted representatives from the sector to create codes in alignment with an accompanying position paper outlining expectations. The codes were delivered and eSafety provided feedback on the initial drafts, which included a “carve-out” for YouTube, exempting it from the SMMA obligation and its age assurance requirements.

Now, Commissioner Julie Inman Grant has published the office’s advice to the Minister for Communications, delivered at the minister’s request, and YouTube is the headlining issue.

The advice notes “mounting evidence to suggest certain design choices, features and functionality may contribute to or amplify the risk of unwanted and excessive use, and the risk of encountering harmful content or experiences (including enabling highly idealised and edited content as well as other forms of high-risk content or activity).”

“To protect children from the risk of these harms, the Rules should account for these choices, features and functionality. Currently, the Rules seek to do this by reference to a service’s purpose, likely based on the premise that services with listed purposes (such as messaging or gaming) are less likely to have some of the features and functionality which have been associated with harm on social media.”

YouTube’s broad argument is that its value as an educational tool outweighs the risks it presents to children. Not so, says eSafety: “YouTube currently employs persuasive design features and functionality that may be associated with harms to health, including those which may contribute to unwanted or excessive use (such as infinite scroll, auto-play, qualitative social metrics, and tailored and algorithmically recommended content feeds).”

In an address to the National Press Club outlining her advice, Inman Grant said “YouTube was the most frequently cited platform in our research with almost four in 10 children reporting exposure to harmful content there.” She added that “YouTube surreptitiously rolled back its content moderation processes to keep more harmful content on its platform even when the content violates the company’s own policies.”

Besides, the SMMA obligation applies to Instagram and TikTok, both of which host short form video content. Why not YouTube? Or, more precisely, why single out YouTube by name? “Naming specific services (e.g. YouTube) in the Rules risks creating inconsistencies with the SMMA obligation’s intention to reduce harm to children,” the commissioner writes. “In general, I caution against excluding particular services without conditions.”

YouTube wants Inman Grant to stick to original plan

YouTube is not pleased about Inman Grant’s advice, and has effectively accused her of flip-flopping on the age check issue. The Guardian reports on comments from YouTube’s public policy and government relations manager, Rachel Lord, who says the eSafety Commissioner’s position “represents inconsistent and contradictory advice, having previously flagged concerns the ban ‘may limit young people’s access to critical support.’”

The advice “goes against the government’s own commitment, its own research on community sentiment, independent research, and the view of key stakeholders in this debate,” Lord says.

The service – statistically, the most popular social network – had been given the same status and carve-out as Google Classroom and online services like ReachOut and Kids Helpline. Its position is that it is primarily a video distribution platform that does not prioritize social interaction. It also points to its past efforts to moderate content on its platform, develop age-appropriate products and pursue age assurance solutions.

Commissioner handed ‘proverbial sandwich’ of regulation

The Guardian references “several social media platforms” that have “privately expressed concern about a lack of information about their obligations under the laws,” and raised doubts they’ll be able to implement age verification or age estimation systems before the deadline.

Other commentators are publicly calling for the eSafety Commissioner to put the brakes on the legislation. An editorial in ABC News lays out the case, saying that the government has handed Inman Grant “a proverbial sandwich in the form of a huge, unwieldy scheme that affects some of the community’s most vulnerable, governs an ever-changing online landscape and requires facing off against some of the most powerful companies in the world.”

Unanswered questions remain, says the piece. “With so many key factors seemingly up in the air, there are questions to be asked about why the deadline for the new rules to be implemented hasn’t been shifted.”

Social media not as dangerous as sharks, bullets, missiles

The position, however, unintentionally highlights a foundational problem with the debate over the so-called “social media ban” (which Inman Grant calls a “delay”).

Arguments that say the law should be delayed hinge on the idea that imperfect regulations should not be formalized – with the implication being that those imperfections are liable to lead to catastrophe.

Yet the stakes (whether or not kids can log on to social media) are high on a societal level, but not immediately urgent. Should the rule go into effect with evolving age verification technology that misses a few kids, the world will not explode. Likewise, if a young person in need of resources is unable to access YouTube and that becomes an existential crisis, surely the failure lies not in social media legislation but in education, social support and our relationship to the internet.

The crisis that has developed around social media has come on slowly, over two decades. Basing policy decisions on false absolutes is only likely to exacerbate the problem. When Inman Grant says the government is “building the plane a little bit as we’re flying it,” she verges on making the same mistake – a social media account is not a plane crash – but her point is that the regulations must evolve in tandem with learnings, in something like an agile model. Social media changes rapidly; laws must be adaptable.

And the learnings are ongoing. Those who worry that it’s simply not possible for platforms to implement age verification or age estimation technology within six months clearly missed the recently published findings from the Age Assurance Technology Trial, which confidently determined that “age assurance can be done in Australia and can be private, robust and effective.”

AVPA honest about age estimation capabilities: no ‘silver bullet’

And yet even those who know the technology best are open about its limitations. On LinkedIn, the Age Verification Providers Association (AVPA) has published a reflection on the trial’s findings and the public response – notably a critique from ABC News that “age-checking technology behind the teen social media ban could only guess people’s ages within an 18-month range in 85 percent of cases”.

AVPA says that’s not news to anyone who knows age estimation technology – a less rigorous age assurance method for lower-risk use cases.

“If anyone tries to claim they have devised an algorithm that can accurately assess your age within a day, a week or even a month of your real age based solely on a few selfies, they are not being truthful,” AVPA writes. “Perhaps as computing power continues to improve, and, crucially, with the addition of a lot more data on which to train the models, the best in class might start hitting averages within 3-6 months, but for the foreseeable future, when estimation is used as part of a regulatory regime, it has to be adopted pragmatically, taking into account its limitations.”
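AVPA’s call for pragmatic adoption is commonly realized as a buffer around the age threshold: estimates well clear of the cutoff are decided automatically, while borderline cases fall back to a stronger verification method. The following is a minimal sketch of that routing logic, assuming a hypothetical estimator; the 1.5-year margin simply mirrors the 18-month range cited by ABC News and is not drawn from any regulation.

```python
# Sketch of a buffered age-estimation decision (hypothetical values).
# Estimates inside the error margin are routed to a stronger
# verification step rather than decided automatically.

MINIMUM_AGE = 16.0    # Australia's SMMA threshold
ERROR_MARGIN = 1.5    # years; mirrors the 18-month range cited by ABC News

def route(estimated_age: float) -> str:
    """Decide access from an age estimate and its error margin."""
    if estimated_age >= MINIMUM_AGE + ERROR_MARGIN:
        return "allow"   # clearly above the threshold
    if estimated_age < MINIMUM_AGE - ERROR_MARGIN:
        return "deny"    # clearly below the threshold
    return "verify"      # borderline: fall back to age verification

print(route(19.0))  # allow
print(route(12.0))  # deny
print(route(16.2))  # verify
```

Widening the margin reduces wrong automatic decisions at the cost of sending more users to the slower verification path, which is the trade-off AVPA describes.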

The association jokes that “if you want to avoid riots in playgrounds across Australia, using estimation to determine if you can or cannot open a social media account on your 16th birthday is not going to be ideal, because some would be ‘lucky’ enough to pass the test early, and others may not pass for several months after they turn 16.”

But there will not be riots. AVPA is more accurate in illustrating the issue as a playground scrap: hardly a question of life or death.

AVPA also makes the good point that all of this is extremely new. “Australia is likely to be the first jurisdiction in the world to attempt to enforce age restrictions below 18. How this is done is not going to rely on a single ‘silver-bullet’ answer.”

“We would expect regulations to be technically neutral – and instead focus on the required outcome of any combination of age assurance methods. They may indicate, as Ofcom did in the UK, examples of methods they believe capable of achieving the policy objectives, but should then leave it to the platforms themselves to demonstrate, ideally through independent audit and certification of their overall system, that they are delivering to the required standard, while maintaining inclusivity and accessibility.”

Market will become more clearly defined over time

This points to yet another big truth that’s often glossed over: not every age assurance method has to be successful enough to be deployed in practice. And not all will. While its current state resembles a Wild West, with a host of players jockeying for status, it is likely that a select few products will come to dominate in the Australian market. Which ones will likely be determined as much by regulatory agility as by the number of false positives in any given test. Inman Grant points again to YouTube as an example of a platform that has rolled back content safeguards: “this really underscores the challenge of evaluating a platform’s relative safety at a single point in time.”

Regardless, the issue calls for a more nuanced view than simply, “this works or it doesn’t.” Gun control does not stop everyone from acquiring guns illegally – but it’s safer than having no gun laws at all. In the case of social media, where the harms are cumulative rather than instantly fatal, there is room and time to work toward the best solution, and the will to do so: Australia’s trial “found a vibrant, creative and innovative age assurance service sector with both technologically advanced and deployed solutions and a pipeline of new technologies.”

In the meantime, the SMMA obligation will go into effect as of December 2025. Age verification measures will be deployed. Online harms will not cease. LGBTQ kids will not expire en masse from being starved of resources. The world will not end – just keep changing.
