
To ban or not to ban? UK debates age restrictions for social media platforms

Loudest voices in favor come from ‘desperate’ parents, kids, says Kidron
Despite recent legal dead-ends, the UK continues to move toward an Australian-style law putting age restrictions on large social media platforms, and requiring the implementation of age assurance measures. Last week, the Science, Innovation and Technology Committee held a one-off evidence session to explore the idea, following on the recent launch of the government’s formal consultation.

The recorded session progresses in three rounds, giving representatives on each side of the debate the floor to present their respective cases.

Round one: social media is awful and needs age restrictions

Round one welcomes Dr. Rebecca Foljambe, the founder of Health Professionals for Safer Screens, and Frank Young, chief executive of Parentkind – both organizations that claim to represent the voice of parents.

Young begins by pointing to the numbers.

“When we poll parents – they are a broad cohort – normally they divide something like 30/30/30 or 40/40/20; they are equivocal,” he says. “There is one issue where they are not equivocal, and that is social media bans and social media harms. Parents are overwhelmingly concerned about social media and the risk of their children’s access to social media. About 93 percent of parents tell us that they think social media is harmful. You never see 93 percent in parent polling.”

It’s not just parents, either: among 16 to 18-year-olds that Parentkind polled, “about one in five told us that social media had at some point made them feel that life was not worth living. About one third are regularly receiving sexual imagery or messages from strangers. That rises to about half for teenage girls.”

Foljambe notes that, while she would like more hard evidence, the lack of it is due to scant research, not unconcern. Quite the opposite: “the Academy of Medical Royal Colleges, which is made up of 22 medical colleges from different specialties, recently came out and declared this issue a public health emergency.”

“It is important to say, from our perspective as clinicians, that we consider what we see in practice as valid evidence of the harm that children are coming to on these platforms. We believe that this harm is increasing day by day, week by week, year by year, and it is showing no sign of decreasing.”

‘Content’ includes killing puppies, calling women fat

Those asking for specifics can have them. Foljambe says misinformation spread on social media has led to a 50 percent decline in the uptake of hormonal contraception (birth control pills) over the past 15 years. “Women’s health leaders in this country say that that has been predominantly fuelled by misinformation on TikTok, telling young women that hormonal contraception is going to make them fat, infertile and cause brain tumours.”

More teenage pregnancies mean more abortions, and also more inequality.

Then there are certain categories of content that once lurked on the fringes of the internet, but have found welcoming homes on some social platforms. “We know that there is an absolute epidemic of CSAM on the internet: naked pictures of our children everywhere,” Foljambe says.

Sexual strangulation, animal cruelty, content promoting eating disorders: the list goes on. And it’s not just that the content is there; it’s also being delivered by a mechanism that was, like cigarettes, purposefully engineered to addict its users. Round one encompasses debates about the right age threshold, and the scope of a potential law (do AI and gaming count?), but the broad strokes message can be boiled down to the following:

Social media does bad things to kids’ developing minds. The clinical evidence is building, but in the meantime, if you want proof that it’s bad, visit a platform like X or TikTok and see what you can find. Or ask parents. Or, for that matter, kids.

Round two: it’s the law, isn’t it?

Round two welcomes the familiar figure of Baroness Beeban Kidron, who begins by stating her interests as an adviser to the Online Safety Act Network and chair of the Digital Futures Commission. Kidron’s take is tied to the law: she says failure to enforce the UK Online Safety Act (OSA) is “the context for anything we say.”

Questions of semantics and definitions take center stage in the discussion. What should be covered? What shouldn’t? What lessons can be taken from Australia, which implemented its trailblazing Social Media Minimum Age Act in December 2025? The word “ban” is a matter of contention, and a particular frustration to Kidron. “Let us not concentrate on whether the ban is the best way,” she says. “The question is, ‘What do we do now?’”

“You heard from a doctor. You heard from a parents group. I have evidence in front of me that I intend to leave with the Committee from the police. I get calls from lawyers. Teachers are incandescent. Everywhere where the adult world intersects with children, they are crying out for help.”

Kidron dismisses the government consultation as theater: “It is managing the call for the ban rather than managing the call to make the internet safe for children.”

“If the government is not going to move but will keep on consulting and consulting and consulting, let us do something that is not tone deaf and lets parents, doctors, lawyers and police know that they are hearing them.”

For Kidron, the questions on the table are moot compared to what is actually happening on kids’ screens. She points to live streaming as an example of a format that the law has not even caught up with yet. Meanwhile, “chatbots and friendship bots are the new emerging harm.” As to the perpetual question of whether or not effective age assurance measures are available: “yes, yes, yes. You can have privacy preserving age assurance. Age assurance is not perfect, but it is absolutely possible. You can double-check from the difficult ages of 16 and 17 up to 21 or 22.”

What if some kids need cigarettes to feel safe?

Dr. Kim Sylwander, research manager and researcher for the Digital Futures for Children Centre at the London School of Economics and Political Science, brings up the oft-cited concern that, while kids don’t want to be harmed online, they still want access to online spaces – particularly marginalized youth.

“If we take the Australian setting as an example, there were a lot of child and youth groups that came forward and said no. LGBTQ groups and minority youth groups came forward and said that they were not in favour of a ban because they were afraid that their safe spaces online, which they have established through social media, will be dismantled and they will not have access to them.”

“In our consultations with children around the world and all over the UK, we hear again and again that they want to be online, first of all, and they want the benefits of social media, but they do not want the harms. They want us to regulate away the harms.”

This leans toward the “social media as public square” argument, which suggests that, properly managed, social media platforms are important, neutral spaces for people to connect, rather than corporate enterprises aiming to reshape the way humans consume information. It fails to ask a related question: if marginalized youth congregate in online spaces that seem public, but in fact operate at the behest of those close to power, how protected are they, really? In January 2025, Meta CEO Mark Zuckerberg himself announced changes to the company’s Hateful Conduct policy that were seen by the LGBTQ community to be a hostile act.

The discussion turns to the question of safety by design – which is the same question, turned on its head. Round two assumes that social media is something that regulations can shape into a useful, beneficial tool – and ignores questions about whether or not it is, in fact, harmful by design. That young people can’t buy cigarettes doesn’t change the fact that they cause cancer.

To quote Baroness Kidron, “we need to stop treating tech as if it was a responsible adult or – let me put it more strongly – a responsible parent to our children, because it is not.”

Kidron also states what is obvious to anyone over 45, but may sound like a revelation to today’s youth: “I would just challenge the idea that social media is the only way to get information. The digital world is far broader.”

Round three: tales from the age assurance front line

The third and final round of the session passes the baton to Australia, and specifically to eSafety Commissioner Julie Inman Grant. Along with Professors Jeff Hancock and Amy Orben, Inman Grant is there to offer context in the form of data from Australia’s first few months under the SMMA.

In terms of the challenge, Inman Grant says recent work from the eSafety team “found that 70 percent of young people encountered various harms on social media, as well as across the broader internet on gaming and messaging platforms.”

Inman Grant, who has worked in the tech sector for two decades, says the threat surface is changing. “In my first eight or nine years we saw image-based abuse, cyber-bullying, and child sexual exploitation and grooming. We have seen about a 1,300 percent increase in sexual extortion reports, and that is largely targeting young men between the ages of 16 and 24. We are also seeing young people encountering a lot more online hate and a lot more misogynist content. We are doing a lot of work around being a young man online, and what they are experiencing with the manosphere. We are also seeing more things such as violent fight videos. The threat surface is changing, but unfortunately the harms that have been there for the past 30 years that there has been an internet continue to thrive, sadly.”

Inman Grant has adopted the term “social media delay” to replace the language around a ban. “We need to flip the script from saying that we are trying to prevent children from accessing technology. Really, we are trying to prevent social media companies that have become quite predatory from accessing our children.”

For proof that they have become predatory, look at their lawyers. “When you are looking at something like this, it is technical regulation meets social regulation, which is very, very complex,” Inman Grant says. “You are dealing with some of the most powerful and richest entities in the world, which will gladly use jurisdictional arbitrage, deep pockets and expensive lawyers to get around compliance.”

Study aims to gather critical mass of evidence

Professor Jeff Hancock, the founding director of the Stanford Social Media Lab, is working with Inman Grant to evaluate the SMMA. The law has been deemed a qualified success in its early days, but a comprehensive study will examine its longer-term impacts on young people and families.

“What do people think about these bans? What are the other parents thinking? What will be these norms that will then potentially change behaviour?” Hancock lists some of the questions the study will focus on. “Ultimately, does it lead to different kinds of intended and unintended consequences? One thing the commissioner made clear to us is that she is interested in understanding both those intended consequences – getting the kids back out on to the footy fields, as the Australians also say – and the unintended consequences. Is this problematic for some of the young people or their families?”

The jury is still out on a social media delay for the UK. But global momentum suggests a mass movement is coming. Inman Grant reflects on how Australia, “a small middle power in the south Pacific,” has paved the way for a global shift in how society understands social media and online regulation.

“After being the only online safety regulator for the first seven years, having Ofcom, the Irish, the European Commission, Fiji, Singapore and others come on board has been amazing,” she says. And the legislation is infectious: recently, the Nigerian Data Protection Commission launched a survey to gather feedback on the proposed regulation of a minimum age for social media use in Nigeria.

“You might know that we started the Global Online Safety Regulators Network with the UK, Ireland and Fiji in 2022, and that has been growing from strength to strength,” Inman Grant says. “My hope is that we will have regulatory coherence for countries that decide to follow this path.”

“Again, we are going first.”

AVPA gets precise on policy focus

On LinkedIn, the Age Verification Providers Association (AVPA) says it has submitted written evidence to the UK, drawing on the Australian Age Assurance Technology Trial, “to reassure the Committee that the technology exists to deliver age-related policies, be that a ban, a delay or just extra protections for kids online, and is already delivering change in Australia (aligned with ISO 27566-1 and IEEE 2089.1).”

But the letter also calls attention to details around accessibility, age thresholds and the differences between age estimation and age verification.

“The Committee should be aware that checking whether someone is above or below an age threshold below 18 is inherently more challenging than verifying that someone is an adult. Adults typically have multiple data sources that can confirm their age,” it says. “Younger users often do not.”

Where does policy want to press, and where is there wiggle room? Is the priority minimizing false positives, or false negatives? AVPA says a hybrid approach could “supplement estimation technologies with access to authoritative data sources that can confirm the age of younger users when estimation alone cannot provide a sufficiently precise answer.”
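The hybrid approach AVPA describes can be pictured as a simple decision flow: accept or reject on a confident estimate, and escalate to an authoritative record only when the estimate falls inside an uncertainty buffer around the threshold. The sketch below is purely illustrative; the `User` fields, `is_over` function and buffer width are hypothetical assumptions, not any vendor’s API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    # Hypothetical fields for illustration only
    estimated_age: float           # output of an age-estimation model
    verified_age: Optional[int]    # from an authoritative record, if any

def is_over(user: User, threshold: int, buffer: int = 2) -> bool:
    """Hybrid age check: act on a confident estimate, escalate
    to verification inside the uncertainty buffer."""
    if user.estimated_age >= threshold + buffer:
        return True    # clearly above the threshold: allow
    if user.estimated_age < threshold - buffer:
        return False   # clearly below the threshold: deny
    # Estimate is too close to call: require an authoritative source
    if user.verified_age is not None:
        return user.verified_age >= threshold
    return False       # fail closed when no record exists

# Example: a user estimated at 16.5 against a threshold of 16 falls
# inside the buffer, so the decision rests on the verified record.
print(is_over(User(estimated_age=16.5, verified_age=17), threshold=16))
```

Widening the buffer trades user friction (more escalations to verification) against the false-positive and false-negative rates the policy is willing to tolerate, which is exactly the wiggle-room question posed above.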
