Survey shows social media firms ignoring Australia’s minimum age law

More data has been released showing that Silicon Valley’s social media giants have no interest in complying in good faith with Australia’s Social Media Minimum Age (SMMA) requirement. A YouGov survey commissioned by 7News Spotlight found that, among 1500 Australians aged 13 to 15, 85 per cent are still using social media daily.
Fifty-two per cent say it's still easy to access platforms. Most, says the report, have gotten around age checks by simply lying about their age. That would be self-declaration, which does not qualify as an effective or reliable method of age assurance.
7News quotes Paul Smith of YouGov, who says “it’s clear that the social media companies have not done anywhere near enough to get young people off social media.”
That’s not to say the legislation hasn’t worked at all. Online bullying has dropped by 9 per cent, and exposure to inappropriate and violent content is down by 18 per cent. Thirty per cent of teens say they’re spending more time on sports and activities, and 27 per cent say they’re sleeping better.
Smith says they’re modest improvements, “but it shows that the ban has had a real impact in improving in just six months, the lives of our young people.” One area in which it’s definitely getting through is with parents, two thirds of whom are now monitoring their children’s social media use.
Emma Mason, an online safety campaigner whose daughter died by suicide after being bullied on social media, says the new data shows “there’s significant work to be done, but the work needs to be done by the social media companies who are continuing to allow this to happen.”
‘We’re basically pushers’
The piece also points to newly released internal documents revealed in the U.S. lawsuits against Meta and Google. One internal document declares that “Instagram is an inevitable and unavoidable component of teens’ lives. Teens can’t switch off from Instagram even if they want to.”
A 2020 exchange has one employee exclaiming, “oh my gosh y’all IG (Instagram) is a drug”.
A colleague responds: “Lol, I mean, all social media. We’re basically pushers.”
Mason says “it’s the disconnect between these documents that show the truth of what’s going on in these companies.”
Tech companies have offered responses to 7News’ findings, which function as a catalog of deflections, evasions and empty assurances.
“Snapchat continues to implement reasonable measures consistent with the social media minimum age law, and we support its goal of improving online safety for young Australians,” says Snap Inc. “Age assurance remains a complex, industry-wide challenge, and we are actively improving our approach as we learn more. Since the legislation’s introduction, we have said there is a more effective way to deliver these protections, such as app store-level age assurance.”
TikTok says “the safety of our community is the highest priority. In Australia, TikTok is a 16+ platform, and we continue to proactively detect and remove suspected underage users. If anyone sees an account they believe shouldn’t be on TikTok, we encourage them to report it in-app.”
Meta claims it has “fundamentally redesigned the experience for teens on our platforms. This includes Teen Accounts, which automatically places young people into the most protective experience, with restricted messaging, sensitive content filtering and overnight notification limits. These protections are on by default, and teens under 16 need a parent’s permission to change them.”
“In Australia, where under-16s are banned from social media, these protections apply to 16- and 17-year-olds, ensuring that young people permitted on our platforms still have built-in safeguards. These default protections work alongside Family Centre, which gives parents additional tools to supervise their teen’s experience, including time limits, content restrictions and messaging oversight.”
“We continue to believe the most effective approach to age assurance is age verification at the app store level, giving parents a single, consistent place to manage their children’s access to all apps and services, not just social media platforms.”
Meanwhile, not to worry, the company says: its AI-powered profiling systems will help it do better.
For AVPA, numbers are an insult to provider capabilities
Responding to news of the survey on LinkedIn, the Age Verification Providers’ Association (AVPA) says the data shows exactly what it argues in its recent Lessons Learned report: “the problem in Australia is not too much age assurance, it is too little.”
“Since removing the 4.7 million honest kids who had registered their ages as below 16, it is clear that social media platforms are not making great efforts to detect underage accounts – which, incidentally, they claim their internal ‘heuristics’ tools are capable of doing when they argue against age verification requirements in other jurisdictions.”
AVPA points to KJR research, which found that 9 out of 10 social media platforms in scope do not even check the age of a new user.
“Some suspect the global leadership of these platforms is prepared to pay $49.5 million AUD fines as long as the policy can be portrayed as a failure to other governments.”
Indeed, others have noted that the sheer wealth behind social media companies is a problem for regulatory enforcement that relies on financial penalties.
AVPA takes no formal position on Australia’s policy, “but will defend the capabilities of our sector’s technology to deliver far better results than this.”
“As the Australian Age Assurance Technology Trial demonstrated, well over 90 percent of Aussie <16s should have had their accounts deactivated by now.”