Effectiveness of age gates for social media algorithms, chatbots next targets in UK

The algorithms that social media platforms use to determine what is presented to young UK users in their feeds will be subject to auditing under the Online Safety Act.
Ofcom Chief Executive Melanie Dawes told the Financial Times that her agency would pursue enforcement action against social media platforms that cannot prove their algorithms prevent children under 18 years old from exposure to restricted content.
The regulator has also discussed how the OSA applies to chatbots and generative AI tools.
Ofcom will examine content moderation and recommendation systems on sites including YouTube, Roblox and Facebook to make sure they do not algorithmically deliver adult content.
OpenAI has acknowledged the applicability of the OSA to ChatGPT, Dawes says, while her comments on Grok indicate that X has not.
Lawmakers in the EU and Australia are currently grappling with whether and how to restrict social media access to young people.
In total, Ofcom currently has 69 investigations into possible violations of the OSA and its age verification rules, according to the report.
Dawes also notes a significant decline in VPN use since it surged in the wake of the OSA’s launch.
Liberal Democrat Lord Timothy Clement-Jones introduced a motion “to regret” in the upper chamber last week, arguing that Ofcom’s child safety codes set “a ceiling, not a floor” for online child protection, MLex reports. He was joined by legislators from other parties.
Lawmakers argued that the codes are not specific enough to meet the differing needs of children at different ages, that civil society feedback was not sufficiently acted on, and that live-streaming and algorithms that promote harmful but legal content need more scrutiny.
Article Topics
age verification | biometric age estimation | chatbots | children | Ofcom | Online Safety Act | social media | UK age verification