As the Dec 10 deadline approaches, opponents of Australia's social media law get louder

With less than two weeks to go before Australia’s social media ban for people under 16 takes effect, Silicon Valley appears to be in a panic.
In what is being framed as a legitimate compliance concern, but looks more like a calculated power move to undermine a law they despise, major social media platforms have made public statements saying they have “adopted government-issued IDs” for age verification.
In comments to Sky News, Meta says that if someone is dissatisfied with the outcome of age estimation, they can upload an ID to contest the result. “This is important, as we know age estimation technology is not 100 per cent accurate and is even more difficult at the 16 age boundary,” a spokesperson for the company says.
This is not actually news: much of the discussion to come out of the Australian Government’s Age Assurance Technology Trial centered on the importance of offering user choice for age assurance, and concurred that there is no one solution that will work in every scenario for every user.
In comments emailed to Biometric Update, the Age Verification Providers Association (AVPA) notes the variety of options available to users: “Estimation is sufficient for almost all users above 19, but some around that age and below may need an alternative – but this need not be government ID; it could use bank check, checks with school records, or even an email address or mobile number. Interoperability across age assurance providers is now being rolled out so any age check with a method the user prefers on one site can be used across multiple services without repeating the process each time.”
TikTok more or less concurs. “We use a multi-layered approach to age assurance that relies on various technologies and signals to confirm someone’s age,” a spokesperson confirms. “If we wrongly deactivate an account, it is easy for people to submit an appeal, which includes methods that don’t rely on a government-issued ID.”
According to Sky News, Snapchat also confirmed a “shift toward ID-based or government-linked verification solutions.” Yet this contradicts recent news that Snapchat has engaged ConnectID to provide bank-account-based age assurance, which does not require government ID.
The story quotes shadow communications minister Melissa McIntosh, who inaccurately claims that “Meta and probably other platforms will be compelling Australians to use their digital ID if their age verification technology can’t establish an age.”
In a separate interview with 2SM Mornings, McIntosh says she has “been warning for a long time now, despite the great intention of protecting kids, it’s a high risk of failing. And then if that fails, what are we left with? What are we left with? What happens next after this?” She criticizes the eSafety Commissioner for adding a platform (Twitch) to the list of covered services, and suggests that instead the commissioner should be focusing on bigger threats, such as “some of these really serious dark web and other technologies.”
Sowing distrust in age check systems gives Meta excuse to shrug them off later
The language, tone and framing here all suggest that Meta and the politicians who align with its agenda are grasping at straws ahead of December 10 – and maybe even setting up a narrative that will allow them to dismiss the law. If Meta’s stance is, “we offer facial age estimation but it doesn’t work for most people, who will have to upload digital ID,” it’s easy enough for them to accuse Communications Minister Anika Wells of breaking her promise that (as she says in this Facebook video), by law, “social media platforms cannot require you to upload your government ID.”
The language at play in the Sky News article is important: no one will “compel” anyone to upload a government ID. Whether it will be “required” is a finer question: even if there is what could be considered a requirement to prove one’s age with digital ID, it will likely not be done through the social platforms themselves, but third-party age assurance providers that are certified to international standards.
As to the accuracy of these companies’ facial age estimation systems, benchmarking is recorded in the National Institute of Standards and Technology (NIST)’s Face Analysis Technology Evaluation (FATE) for Age Estimation & Verification. NIST’s biometric testing has found that the majority of facial recognition algorithms are more likely to misidentify people with darker skin, women and the elderly. But the most accurate algorithms show very low differentials in the Institute’s latest testing. In other words, some systems will perform better than others. There has been plenty of documentation to that effect already.
Moreover, since the VPN argument has proven so popular among critics of online safety laws, it should be applied to social media, as well: if an Australian is determined to access their Facebook account without performing age verification, they can always trick the computer into thinking they’re in another country by using a VPN. Surely, that will be a more attractive option for the tech savvy and paranoid social media user than uploading a digital ID against their will.
Finally, it must be noted, as often as possible, that the same social media companies claiming age assurance technology presents a privacy risk are among the largest storehouses of data to ever exist, rely on sales of this data for their revenue, and have recently been pushing their camera-equipped smart glasses on the public, which have been found to harvest user data for AI training purposes.
Privacy commish says people need to know more about law
More weight can be given to comments from Privacy Commissioner Carly Kind, who says a recent breach of Discord’s servers, which affected 68,122 Australians and caused serious harm to hundreds, “goes to many of the concerns that Australians already hold about the looming age assurance scheme.”
Speaking with Crikey, Kind says she recognizes that Australians don’t have a high degree of trust in social media companies to begin with. “So asking them to just trust that the companies will do what’s in their best interests is potentially a reach. That’s why regulators exist to ensure that there’s trustworthiness in the system.” She believes more public education on the matter could help assuage some concerns.
Article Topics
age verification | Australia | Australia age verification | AVPA | biometric age estimation | biometrics | children | facial age estimation (FAE) | legislation | Meta | social media | TikTok