Governments want fast action on social media age checks, but compliance lags

Meta has run afoul of EU online safety laws. The European Commission has announced that it has found Instagram and Facebook, both owned by Meta, in breach of the Digital Services Act (DSA) for “failing to diligently identify, assess and mitigate the risks of minors under 13 years old accessing their services.”
This is not a revelation, given Meta’s recent track record on privacy, user safety and biometrics. That record currently includes court decisions in the U.S. finding Meta and YouTube liable for negligence in their platform design and inadequate in their efforts to protect kids from predators; furor over Meta’s plan to release facial recognition-enabled smart glasses; and the introduction of a new, mandatory employee monitoring tool that tracks company workers’ keystrokes and mouse movements to harvest data for training AI models.
Moreover, data from Australia shows that 9 in 10 of the largest social media platforms are “still not routinely confirming self-declared ages for new accounts.”
European Commission to Meta: do better on your own policies
Now, the European Commission says Meta’s measures to enforce the minimum age of 13 “do not adequately prevent minors under the age of 13 from accessing their services nor promptly identify and remove them, if they already gained access.” In other words, users can still simply enter a false birth date to gain access.
Per a release, “DSA guidelines identify age estimation and age verification as an appropriate and proportionate way of ensuring a high level of privacy, safety and security for minors. To be effective, all age-assurance technologies must be accurate, reliable, robust, non-intrusive, and non-discriminatory.” In the Commission’s eyes, Meta has failed to meet the standard.
It also criticizes Meta’s reporting system, calling it “difficult to use and not effective,” and notes that the violations build on “an incomplete and arbitrary risk assessment.”
“Meta’s assessment contradicts large bodies of evidence from all over the European Union indicating that roughly 10-12 percent of children under 13 are accessing Instagram and/or Facebook. Moreover, Meta seems to have disregarded readily available scientific evidence indicating that younger children are more vulnerable to potential harms caused by services like Facebook and Instagram,” the Commission says.
Henna Virkkunen, executive vice-president for tech sovereignty, security and democracy, points out that Meta’s own conditions acknowledge that their services are not intended for users under 13. “Yet, our preliminary findings show that Instagram and Facebook are doing very little to prevent children below this age from accessing their services. The DSA requires platforms to enforce their own rules: terms and conditions should not be mere written statements, but rather the basis for concrete action to protect users – including children.”
The Commission now believes Instagram and Facebook must change their risk assessment methodology, strengthen their safety measures, and “effectively counter and mitigate risks that minors under the age of 13 could experience on the platforms.”
Meta can now respond and try to fix the issues. If the Commission is unsatisfied, a non-compliance decision could trigger a fine “proportionate to the infringement which shall in no case exceed 6 percent of the total worldwide annual turnover of the provider.”
Herein lies the problem. Meta’s turnover in 2025 was about $201 billion, meaning the maximum fine would be just over $12 billion. Mark Zuckerberg, Meta’s founder, CEO and controlling shareholder, is estimated to have a personal net worth of $252 billion, a significant chunk of it tied to Meta shares. Even so, the question becomes: how effective are fines against companies and people that can simply absorb them? What will it take to make Meta fall in line?
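As a quick sanity check on the numbers above, the DSA’s ceiling of 6 percent of worldwide annual turnover can be worked out directly. The figures below are the article’s approximations, not official accounting data:

```python
# Back-of-the-envelope check of the DSA fine ceiling quoted above:
# the cap is 6 percent of total worldwide annual turnover.
DSA_FINE_CAP_RATE = 0.06          # "shall in no case exceed 6 percent"
meta_turnover_2025 = 201e9        # article's approximate figure, in USD

max_fine = DSA_FINE_CAP_RATE * meta_turnover_2025
print(f"Maximum DSA fine: ${max_fine / 1e9:.1f} billion")  # → $12.1 billion
```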
UK to have age restrictions on social media by end of year
The UK has passed the Children’s Wellbeing and Schools Bill, giving ministers the power to introduce age restrictions on social media platforms. The bill had been kicked back and forth between the House of Commons and the House of Lords, but received Royal Assent on April 29.
The Bill calls for “all regulated user-to-user services to use highly-effective age assurance measures to prevent children under the age of 16 from becoming or being users.”
That includes large social media platforms like Instagram and Facebook.
UK junior education minister Olivia Bailey has left no doubt about the government’s resolve, telling the House of Commons that “we are clear that under any outcome we will impose some form of age or functionality restrictions for children under 16.”
Platforms must file a progress report after three months, and Bailey points to “our intention to quickly produce a response following the consultation.”
“The government has said repeatedly that it is a question of how we act, not if. But to put beyond any doubt, we are placing a clear statutory requirement that the secretary of state must, rather than may, act following the consultation. This brings forward regulations without preempting the consultation’s outcomes and does not ignore the tens of thousands of parents and children who have already engaged with us.”
“Let us be clear. The status quo cannot continue.”
Canada seeing big push for social media laws
In Canada, age assurance legislation for social media is being implemented at the provincial level, as Manitoba Premier Wab Kinew moves to “take action on things that are really harming our kids. These are forces that contribute to anxiety and depression, these are forces that lead to young women and girls being trafficked and these are forces that lead to too many of our precious children taking their own lives. I’m talking about social media.”
Per Global News, Kinew gave no details about a potential age threshold or how a provincial government could have jurisdiction over social platforms. But it may not matter, as members of the federal Liberal party, which recently gained a majority, voted earlier this month to set 16 as the minimum user age for social media under an eventual law.
Global quotes Quebec MP Rachel Bendayan, who said she was “astonished by how many youth she personally spoke with who support the idea.”
“I was very surprised to see so many teenagers and people within the age group I was targeting tell me they were in favour of this resolution, in part because they felt they have no choice but to be on social media. So it’s not a ban for a ban’s sake. It’s something that would change the way society operates at the moment.”
Nonetheless, some commentators are not buying into the idea that a so-called ban is the right approach. Law professor and longtime digital rights observer Michael Geist writes on his blog that, “by focusing legislative attention on who is permitted to use social media rather than on how the platforms operate, an age-based ban functions as a pressure-relief valve for legislators and a gift to the companies, since it allows them to maintain existing practices while shifting the regulatory conversation to age-gating mechanisms that the platforms themselves will administer.”
“Strip away the political theatre and what remains is this: the ban will not keep most kids off the platforms,” Geist says, pointing to the situation in Australia. “It will not measurably reduce the harms (the Australian regulator has yet to find evidence that it has), and it will impose privacy and free expression costs on every Canadian who wants to use ordinary social media services while leaving the underlying platform problems untouched, because the legislation does not actually require platforms to change anything about their products beyond who is allowed to log in.”
Social media regulation has appeal globally
Geist astutely notes that provincial-level legislation will cause fragmentation. But his argument points back to the core problem: Meta and its peers appear to have no interest in complying with regulations. The idea that they would do so if the law targeted a different aspect of their operations ignores what has become plain: social media is built to be what it is.
Coming to understand how social media harms people is not a call to enact legislation to fix the big platforms, any more than understanding the harms of tobacco is a call to legislate healthy vapes. The problem is baked into the product; even if a law says smokes can’t have arsenic in them, they’re still going to be bad for you.
The movement to legislate age assurance requirements on social media platforms can also be judged by the breadth of its appeal. This is a global issue, being driven by groundswell support from parents everywhere – and from many kids.
According to a recent report from eSafety Australia on attitudes of children and parents to social media age restrictions before implementation, although kids expressed a complex relationship with social media, “parents and children generally supported the intention behind the age restrictions, recognising that social media could cause harms and that more protection is needed to keep children safe online.”
Rwanda pursuing age minimum for social media
The government of Rwanda is considering a social media law prohibiting kids under 16 from using social media platforms. A New Times report says Minister of ICT and Innovation Paula Ingabire has confirmed that “relevant institutions are working together to draft the legislation, aimed at curbing cyber-related crimes and shielding minors from harmful online content.”
“These are the measures we have seen being implemented in other countries, but they must be adapted through collaboration with internet service providers, parents, social media companies, and children themselves, so that they understand they are not permitted to own accounts on such platforms,” Ingabire says.
“We are still a country striving to expand in terms of technology, but we are doing so in a way that minimises the negative impacts associated with it.”
Inman Grant facing death threats over social media law
Age assurance laws for social media have proven to be highly contentious, prompting callouts from some of the platforms’ billionaire owners. Rage needs a target, and for some, that has been Australia’s eSafety Commissioner, Julie Inman Grant, whom X owner Elon Musk called a “censorship commissar,” setting off an avalanche of abuse for Inman Grant and her family, who have been doxed and received death threats, according to a report from SBS News.
“There are protections – and I support them – provided to elected members of parliament, but there aren’t the same protections provided to regulators like myself,” Inman Grant says. “I’m kind of a new case, because I guess there aren’t that many regulators around the world that have been issued a dog whistle from Elon Musk.”
“It comes with a cost, but what they don’t realize is: the more they target me, the more I dig in.”