UK ICO wants platforms to go further on age assurance

The UK’s Information Commissioner’s Office (ICO) has urged social media and video-sharing platforms to significantly strengthen their age assurance systems so that young children are prevented from accessing services that are not meant for them.
In an open letter, the UK’s independent regulator also supported Ofcom’s just-announced call for platforms to enforce minimum ages and to ensure algorithms are designed to prevent children from seeing harmful content.
“Age assurance technologies have rapidly advanced in recent years,” the open letter says, “creating new viable and privacy friendly solutions that can enable you to much more accurately identify if children are 13 or over before they are able to access your service.”
The regulator says platforms with minimum age rules must stop relying on self‑declared ages, which children can bypass with ease, and instead adopt the robust age assurance technologies that are now widely available.
“If your service is not suitable for children under a minimum age set out in your terms of service, you should therefore prevent access to children under your minimum age by implementing an effective age gate,” the open letter says.
“Given the advances in age assurance technologies, we expect services to be making use of current viable technologies — examples include but are not limited to, facial age estimation, digital ID, or one-time photo matching — when enforcing minimum age requirements.”
The ICO has written directly to TikTok, Snapchat, Facebook, Instagram, YouTube and X, asking each company to show how its current systems meet these expectations.
This marks the next phase of the ICO’s Children’s Code strategy. The code has pushed platforms to improve children’s privacy protections, and the regulator now believes companies must go further. This includes being able to accurately identify which users are children, so those users receive the safeguards to which they are entitled.
The ICO has recently taken enforcement action against platforms that failed to do so, including fines of £14.47 million (US$19.4 million) for Reddit and £247,590 ($332,000) for MediaLab (owner of Imgur) for not implementing adequate age assurance measures and unlawfully processing children’s data in ways that exposed them to harmful content.
The regulator remains concerned about how platforms use children’s data in recommender systems, particularly when algorithms push harmful material or encourage addictive use. It has ongoing investigations into TikTok and Meta on this issue.
The ICO is working closely with Ofcom, which enforces the Online Safety Act, to align expectations around age assurance. Both regulators will publish an updated joint statement this month.
Notably, the ICO’s letter comes after UK MPs voted down an amendment that would have banned social media for under-16s. That amendment would have required companies to enforce highly effective age assurance to control access to online platforms. The ICO is an independent regulator, but its call amounts to a warning.
The political climate has grown far more critical of platforms such as Instagram, TikTok and YouTube, and the call to strengthen age assurance systems may prove to be more than advisory.
“Our message to platforms is simple: act today to keep children safe online,” said Paul Arnold, ICO CEO. “There’s now modern technology at your fingertips, so there is no excuse not to have effective age assurance measures in place.”
“Platforms need to be ready to demonstrate what they’re doing to keep underage children out and to safeguard those children who are old enough to access their services.”
Article Topics
age verification | biometric age estimation | children | Information Commissioner’s Office (ICO) | Online Safety Act | social media