UK OSA rollout offers lessons for US lawmakers on trust, communication

Ensure laws to protect kids don’t shut down lawful sites – and understand the law 

The current debate over the UK’s Online Safety Act provides a useful illustration of how two key pillars of a successful age assurance sector are only technology-adjacent. The first, and most fundamental, is trust. Some say the unintended consequences of the OSA show that the government’s age check policy can’t be trusted, and this bleeds over into third-party providers. The second is communication – and the age verification sector says the problem is partly in how companies are interpreting the law.

Kids online safety laws can’t win if they shut down pet forums: ITIF

The Information Technology & Innovation Foundation (ITIF) offers a fairly accurate reckoning of how and where the OSA has faltered, and suggests those flaws “should serve as a cautionary tale to U.S. lawmakers before they go too far down a similar path.”

The ITIF points out that many underestimated how many services the law would impact. And it’s not just that platforms like Spotify and Reddit now have to implement age verification for certain content: “other services have shut down completely, not just for UK users but for everyone, out of concern about potential liability for running afoul of the rules.” These include gaming platforms, but also online forums for cyclists and sustainable living.

Vague language around what’s “harmful” and what’s “highly effective” is identified as a problem. “While some content clearly falls into these categories, other content is less certain,” says the ITIF. “However, online services risk penalties if they fail to remove content that regulators later deem harmful. As a result, they have an incentive to take down more content than is necessary to stay in compliance with the Act.”

Tellingly, a third major problem with the law is that most adults simply didn’t clock that it would affect them. The idea that “age verification” is a way to keep kids out, and as such applies only to kids, misses the fact that platforms must verify the age of every user to determine who is old enough to enter. That means measures framed as “child online safety” have been a rude awakening for many adult users.

The ITIF has three major recommendations for U.S. policymakers. First, “proposals should balance children’s safety with adult privacy.” Second, proposals should “avoid collateral damage” by narrowing targets to truly harmful online content: “children’s online safety laws will keep facing significant public backlash if their most visible impact is shutting down online forums about pets and sports.”

Third, they should “be cautious about extending restrictions to lawful content.” Regulators will need to be “crystal clear about what types of lawful but harmful content they want online platforms to restrict, to prevent them from unnecessarily limiting content.”

The points are well-made, as is the larger argument: the UK has led the expedition into the wilds of age check legislation, and cleared away some of the thickest vines. The U.S. need not get caught in the same tangles as it walks the same path.

Patchwork of US data laws adds to culture of distrust

The Age Verification Providers Association (AVPA) has issued a pair of statements in defense of age assurance and the OSA.

The first is in response to a brief from the Center for Democracy & Technology (CDT) on U.S. age assurance legislation. In effect, AVPA goes through CDT’s concerns and concludes that the two parties agree on many matters concerning transparency, user agency, privacy by design, and strict deletion policies. “Where we diverge is that today’s solutions already offer many of these safeguards, and the policy environment in places like the UK now demonstrates the capability to deliver a high level of age assurance at national scale.”

The kernel of the issue is cultural difference. AVPA astutely points out that “it is harder to win trust in the U.S. because there is no comprehensive federal privacy law, so people rely on a patchwork of state rules and sector laws. That inconsistency understandably breeds doubt about who holds personal data and for how long.”

The group contrasts this with the UK, which is governed by the European standards of the GDPR. It notes that “strong age checks for both social media and pornography came into full force on July 25, 2025, and services have been delivering them at scale since then. We have not seen any major security incidents reported against certified age assurance providers in that first month, which matters for public confidence.”

Section 22 free speech measure in OSA needs more love

The second missive aims to clarify what, exactly, the OSA covers. The law has lately faced pointed (and ideologically driven) accusations of censorship from the Trump regime – which, says AVPA, misunderstands how it actually works.

“Far from censoring legal content, the OSA targets only illegal material, aligned with existing offline restrictions, while Section 22 introduces a groundbreaking statutory protection for freedom of expression in UK domestic law,” says AVPA’s post.

“Claims that the OSA enables censorship of legal content often stem from misinterpretations or early implementation hiccups, not the Act’s legal framework. For instance, some platforms initially over-blocked content when child safety duties took effect in July.”

Three reasons are given for this. First, “platforms misread the OSA’s requirements, applying overly broad filters to avoid penalties, and not noticing the duty to protect freedom of expression.” Second, some may have skimped on the technology needed to “target the narrowly defined ‘primary priority content’ (i.e. pornography, information on how to harm or kill yourself or starve yourself to death).”

The third is deliberate overreach. “Certain platforms, possibly to provoke backlash, blocked contentious but legal content such as Gaza-related coverage fueling opposition to the law.”

The free speech protections in Section 22, AVPA says, provide a legal route for organizations hosting lawful content facing restrictions. Section 22 mandates “a duty to have particular regard to the importance of protecting users’ right to freedom of expression within the law” when designing safety measures.

And it is every bit as enforceable as the OSA itself. “Ofcom’s codes of practice embed Section 22’s safeguards ensuring platforms that follow recommended measures comply with free speech duties,” AVPA says. Therefore, “over-blocking lawful content risks enforcement action from Ofcom reinforcing the Act’s commitment to expression.” That means anyone hoping to shut down a site expressing legal opinions they don’t like risks noncompliance, “as they fail to balance safety duties with free speech obligations.”

“The OSA is not a censorship tool,” AVPA says, in direct rebuke to certain opinions aired in Washington this week. “It’s a balanced framework that targets illegal content while embedding robust free speech protections.” The OSA’s restrictions have gotten plenty of attention; it’s time for Section 22 to get its time in the spotlight.
