
Age verification for social media, chatbots go viral with regulators globally

UK, Europe, Canada, Australia all probing chatbot problem, considering age assurance laws

Age assurance has achieved memetic repeatability. Australia’s efforts to keep kids under 16 off big social media platforms are being replicated elsewhere. And the new threat presented by AI chatbots that create nonconsensual porn and counsel violence is spurring even more iterations. The virality borne of social media has been turned against it, as governments weigh the right response to calls for tighter restrictions.

UK public consultation on social media to spark ‘national conversation’

The UK has launched its promised public consultation on whether to impose age restrictions on social media platforms.

A release from the Department for Science, Innovation and Technology (DSIT) calls it “the world’s most ambitious consultation on social media” and says it will include real-world pilots with families and teenagers, to examine how potential social media restrictions could work in practice.

Key questions to be explored include what age should be pegged as the minimum for accessing social media, whether to allow addictive design features such as infinite scrolling and autoplay, the possibility of mandatory overnight curfews, the status of AI chatbots – and “how age verification enforcement should be strengthened.”

The results of the consultation will inform the UK’s path forward on age assurance legislation. While there is likely to be objection from those with privacy concerns, the smart bet is that the UK will follow Australia’s lead in limiting a selection of very large social media platforms to users 16 years of age or older. Stakeholders across the spectrum cite a groundswell of pressure from parents as the primary political driver.

“We know parents everywhere are grappling with how much screen time their children should have, when they should give them a phone, what they are seeing online, and the impact all of this is having,” says Technology Secretary Liz Kendall. “This is why we’re asking children and parents to take part in this landmark consultation on how young people can thrive in an age of rapid technological change.”

The consultation is open for three months, closing on May 26. It is “open to everyone with a view.” This includes parents and caregivers, young people, youth workers, civil society organisations, academics and representatives from related industries, including providers of biometrics and digital identity for age verification or facial age estimation.

The government “will respond in the summer, acting swiftly on the evidence gathered,” and leveraging new legislative powers to do so.

Starmer supports outreach to explore online harms laws

The consultation is not all formality, says DSIT. “Alongside the formal consultation, the government is launching one of its most wide-ranging national conversations on a public issue in recent years. Over the coming 3 months, families, young people, and communities across the UK will be invited to share their views, including through dedicated children’s and parent’s versions of the consultation. The national conversation will include community events, MP-led local conversations, influencer roundtables, and engagement through schools and civil society organisations. A parallel academic panel will also assess the developing evidence base, drawing on international experiences from countries including Australia.”

UK Prime Minister Keir Starmer is expected to support a ban, having pivoted from an initially hesitant tone to a more confrontational stance, telling the big social media companies to “bring it on.”

ICO’s Children’s Codes clear in what’s expected: Tombs

Commenting on the proposal on LinkedIn, Yoti CEO Robin Tombs calls attention to the Information Commissioner’s Office (ICO) ‘Children’s Code Strategy Progress Update’ – and the hefty £14 million (about US$18.9 million) fine it has levied on Reddit for unlawfully processing the data of UK users under 13.

Reddit has appealed the fine. But Tombs says it’s not likely to succeed in overturning it, noting that “18 months ago the ICO made clear that platforms’ use of personal info of children U13 was a priority.”

In short, says Tombs, it’s been clear for some time what’s expected of platforms – more than enough time to look at biometric age assurance technologies as a path to compliance. Age estimation, for instance, “doesn’t require documentary evidence and could be a more privacy-friendly method.”

Regardless, “platforms should ensure their age assurance has an appropriate level of technical accuracy, reliability, robustness and fairness, based on the level of risk posed” – and they should be careful to monitor for any instances of unlawful data processing under the Codes.

Poland joins list of nations pursuing age assurance laws

In the parlance of early social media, one might say that legislation putting age restrictions on social platforms has gone viral. Australia was patient zero, having implemented its Social Media Minimum Age Act in December 2025. Since then, a handful of nations have signaled their intention to do something similar. In Europe, Denmark, Greece, France and Spain have all made moves toward legislation.

The latest to express interest is Poland. In an interview with Bloomberg, Education Minister Barbara Nowacka cites a “decline in the intellectual competence” of children and young people as a motivating factor for the addition of an age assurance feature in the mObywatel (mCitizen) app.

Chatbots continue to draw regulators’ attention

Chatbots are emerging as the next legislative target, in what looks like an early game of legislative catch-up with the pace of technological change. While OpenAI has tested an algorithmic age inference tool similar to YouTube’s, it is also answering for a growing pile of corpses.

Australia has also led the charge on AI, having registered industry-drafted codes last year that tighten the rules for age appropriate content, “including the clear and present danger posed by mostly unregulated AI-driven companion chatbots.” Ireland recently accelerated its legislative process on age laws in response to issues with chatbots producing nonconsensual deepfake nudes. And Canada is “looking into” tightening regulations on chatbots, which include popular large language models ChatGPT, Claude, Gemini and Grok.

This week, Australia turns up the regulatory dial. Reuters reports that the eSafety Commissioner has issued new regulations that require AI chatbots to “restrict Australians under 18 from receiving pornography, extreme violence, self-harm and eating disorder content” or face fines of up to AU$49.5 million (about US$35 million).

The regulator promises to “use the full range of our powers where there is non-compliance.” Notably, this encompasses “action in respect of gatekeeping services such as search engines and app stores that provide key points of access to particular services.”

Canada might legislate AI chatbots, might not

For now, Canada’s plans to legislate AI chatbots are mostly theoretical, although there is public support for the idea. A Globe and Mail report cites a national survey of 1,424 Canadians, carried out by the Centre for Media, Technology and Democracy, which found “widespread concern among Canadians of the risk posed by chatbots.” Notably, three quarters of those surveyed expressed concern about emotional dependency on chatbots.

Per the Globe, AI Minister Evan Solomon has “signalled that in forthcoming privacy legislation, he is preparing to take steps to ensure that chatbots and other digital platforms cannot collect and use the data of children, including for marketing purposes.” Culture Minister Marc Miller is expected to bring the bill forward.

However, Solomon has been a vocal proponent of AI, and plans to approach the issue with innovation in mind.

The Canadian Press aptly sums up the situation: “Ottawa hasn’t confirmed it’s considering a ban, but has also not ruled it out.”

The issue has taken on new urgency, however, in light of the mass shooting in Tumbler Ridge, British Columbia. Per a report from the CBC, ChatGPT “acknowledged it flagged and banned” an account belonging to the 18-year-old shooter “about half a year before she killed eight people, most of them children, and then herself on Feb. 10.”

Given that the shooter was 18, age assurance technology may not have stopped her from interacting with ChatGPT. However, the fact that the exchanges were permissible under current rules suggests the bigger problem may lie not in the user’s age, but in what chatbots are allowed to say at all.
