UK online child safety rules finalized by Ofcom ahead of July deadline

New rules for protecting UK children from online harms have been set with Ofcom's publication of its Protection of Children Codes and Guidance. They include requirements for "user-to-user" or social media platforms to implement highly effective age assurance, just as pornography sites must.
The Codes also require any service with a recommender system and a medium or high risk of harmful content to filter that content out of children's feeds. They further mandate fast content review processes, more options for children to block or mute accounts and comments, and easier complaint and reporting processes.
Ofcom’s decisions are contained in five volumes that describe the regulatory approach taken, the causes and impacts of online harms to children, risk assessments, what services should do, and a consultation on user controls for illegal harms.
Regulators heard from 27,000 children and 13,000 parents during consultations, along with representatives of industry, civil society, charities and child safety experts.
The documents published today by Ofcom include a Children’s Register of Risks and accompanying glossary, Guidance on content harmful to children, Children’s Risk Assessment Guidance and Children’s Risk Profiles, drafts of the Protection of Children Codes for search services and user-to-user services, Children’s Access Assessments Guidance, and Part 3 Guidance on highly effective age assurance (HEAA).
The updated Part 3 HEAA Guidance specifies the same seven methods listed in the January 16 draft as a non-exhaustive set of options for service providers. They include facial age estimation, biometric comparison of an ID document and a selfie, and the use of open banking, credit cards, email addresses, mobile network operator checks or digital identity for age assurance.
The rules for age assurance by “user-to-user services” are set out in Section 13, within Volume 4 of the Codes. This section includes an updated view that age assurance is increasingly effective for differentiating children of different ages. This change addresses, at least in part, a concern raised by the Age Verification Providers Association (AVPA) about Ofcom’s previous position last year.
While platforms are not required to set minimum age requirements, they are expected to use “highly effective means” to enforce them if they do.
The Codes set out six different ways for age assurance measures to be implemented. They apply to service-wide age gating, targeting age-appropriate content to children, and targeting protections for recommender systems used by children. In each case, measures are set out for ensuring protection from “Primary Priority Content” and “Priority Content.”
Ofcom is also opening a consultation on expanding measures for muting and blocking user accounts and comments under the Illegal Content Codes to a wider range of services, including those from smaller providers. Comments on that consultation are welcome until July 22.
Children’s risk assessments are due for all in-scope services by July 24. The Codes will go through a Parliamentary approval process, which is expected to bring the safety measures they require into effect as of July 25. The first deadline for enforcement of the Online Safety Act was March 16.
Statutory reports are also expected from Ofcom on age assurance use and app store access by children sometime in 2026.
Reactions from Yoti and VerifyMy
Yoti Chief Regulatory and Policy Officer Julie Dawson notes in an email to Biometric Update that protections against legal but harmful content are a key aspect of the Online Safety Act. Yoti welcomes the Children’s Risk Assessment Guidance and the first version of the Codes of Practice, praising the efforts of the Ofcom team and the sector to complete them.
“We trust that Ofcom is geared up to support a level playing field in terms of enforcement given the large number of companies (150,000) that are in scope of the legislation,” Dawson adds. “It would be a shame for good actors to be penalised financially for compliance if it takes months for non compliant companies to come into compliance.
“We hope that Ofcom and the ICO continue to look jointly at how age assurance can support the checking of ages under the age of 18; for instance, at 13 and 16 to support the age appropriate design of services.
“We are working with a third of the largest global platforms undertaking around one million age checks daily – including for social media, dating, adult, vaping and gaming sites. This includes 13+/- and 18 +/- age assurance that is privacy-preserving, reliable and effective. Online age checking is no longer optional, it’s a necessary step to create safer, age-appropriate experiences online.”
“As a result of the big changes announced today, age checks should become the norm on some of the most popular websites and online services. Large or high‑risk platforms will have to deploy highly effective age assurance to block children from entire services or specific material like pornography, self-harm and hate content,” VerifyMy Head of Regulatory and Public Affairs Lina Ghazal says in an emailed statement.
“Knowing the age of your users is no longer optional – it is the baseline. Without this, platforms are effectively flying blind and hugely exposed to risk. The good news is that readily available technology, such as email-based age checks, can allow platforms to determine the age of their users quickly and effectively while also preserving their privacy.
“Importantly, the codes also address the functionality of large sites that cater to both adults and children under the same roof. This should prevent younger users from being targeted by predators via direct messages or added to group chats without their consent, enabling bullying.”
Ghazal also lauds the stronger rules for content moderation and recommendations.
“In a refreshing twist, children’s voices will be at the heart of this regulation, so if they see inappropriate content, it will be easier for them to report it. They will be aided by more user-friendly complaints processes and a code of conduct for employees at relevant services focused on raising the bar on safeguarding,” she says.
“This will feel like a rip-off-the-plaster moment for thousands of platforms, including the social media giants, but these changes have been a long time coming, and will take time to fall into place. A huge spring clean will now begin, and, thanks to these codes, by the summer, children across the UK will have safer and more age-appropriate experiences online.
“When the dust settles on today, eyes will naturally turn to enforcement. The new rules will be effective from July 25th, and thousands of platforms will need to pull their socks up. Protecting children online is not a one-time fix. It’s an ongoing responsibility. Platforms must continue to assess risks, monitor the effectiveness of safety measures, and adapt to emerging threats. The online world evolves rapidly – and so must the systems that keep children safe within it.”
Article Topics
age verification | biometrics | children | digital identity | Ofcom | regulation | UK | VerifyMy | Yoti