Rage verification: objections to UK, US online safety laws get heated

Political positions shape arguments as age assurance becomes mainstream
In the course of a few months, online age assurance – and the age verification or age estimation tech that enables it – has gone from a niche topic to a feature of headlines around the world. Two primary catalysts are the start of enforcement of the UK’s Online Safety Act in July and the U.S. Supreme Court ruling in favor of Texas’ age assurance law in late June. But as online safety laws spread, so too does the theater of war over what they mean for individual rights, privacy and access to information.

CDT calls SCOTUS decision ‘significant shift for the First Amendment’

The marquee stories remain the biggest targets for digital rights groups. Today, the Center for Democracy and Technology held a press briefing to provide an overview of its views on age verification policy globally, including the OSA and the U.S. landscape. 

The CDT has argued that the SCOTUS decision in Free Speech Coalition v. Paxton – the former a trade group for the porn industry, the latter Texas’ attorney general – is bad law, calling it in a published statement “a significant shift for the First Amendment generally and for laws restricting access to sexual content online specifically.”

A statement from the organization argues that the court’s opinion “is limited in important ways, and legislatures should not view it as carte blanche to impose age restrictions on access to speech online.” Indeed, “it would be a mistake to read this decision as permitting age verification requirements to access content beyond the narrow category of speech that is obscene for minors.”

The SCOTUS opinion stated that “the First Amendment leaves undisturbed the States’ traditional power to prevent minors from accessing speech that is obscene from their perspective. That power includes the power to require proof of age before an individual can access such speech.”

“It follows that no person – adult or child – has a First Amendment right to access such speech without first submitting proof of age,” which was deemed to “only incidentally burden the protected speech of adults.”

What counts as obscenity? Depends who you ask

The CDT casts a hard look at the question of what is legally considered obscene for minors, and says SCOTUS “reiterated the constitutional test” for what counts. The scope covers whole works that “appeal to the prurient interest of minors,” which “depict or describe specifically defined sexual conduct in a way that is patently offensive for minors,” and “lack serious literary, artistic, political, or scientific value for minors.” For legislators, this typically means pornography. 

The CDT, however, is rightly worried about who gets to decide what’s good for kids and what isn’t, and what counts as having “serious literary, artistic, political, or scientific value” for them. 

Hence CDT’s emphasis on how the Paxton decision is limited in scope. It notes a previous Supreme Court case in which “girlie magazines” such as Playboy were included. It underlines, in the Paxton opinion, how the court noted that speech obscene for minors “cannot conceivably be read to cover, say, a PG-13 or R-rated movie.”

The problem, however, is not that legislators will try to ban Back to the Future from the school library. The CDT’s concerns about overreach are borne out by book bans in the U.S. that are imposed using the same language that is used to regulate porn – although they target LGBTQ resources instead. For instance, in 2024, the South Carolina Senate made library funding conditional on libraries removing “books appealing to children’s prurient interest” from their shelves – which assumes there are obscene books in a library to begin with.

The CDT says the SCOTUS decision “creates significant risks for educational speech about sex and reproductive health and for the LGBTQ community’s speech.” With mounting evidence that the U.S. right wing is actively working to revoke LGBTQ rights (book bans are surging), it is difficult to argue with them.

There is a fairly recognizable common threshold for obscenity. In a casual survey, most adults would probably agree that kids shouldn’t be regular consumers of adult content that explores extreme aspects of human sexuality. Should eight-year-old Timmy be allowed to watch stepmom fetish videos (a major component of many porn tubes)? Or BDSM? While rarely mentioned, this content is exactly what is all over the sites in question, and is what laws targeting “obscene materials” are meant to restrict in spirit. CDT’s point is that things may well go differently in practice. 

Scope creep a concern as OSA impacts non-pornographic sites

On the flipside, in a casual survey, most adults would probably agree that everyone, including kids, should be able to view Wikipedia, listen to Spotify and use the internet as a valuable resource and network. Yet the UK OSA has led to age assurance being imposed beyond porn. 

In an opinion piece from the Center for European Policy Analysis, author Elly Rostoum says the OSA “misses its mark.” Rostoum notes the explosion of VPN use after age assurance rules, as a symbol of the complexity of the larger issue. She says that “what’s unfolding in Britain is part of a wider debate, with global implications, calling into question how much a single country can impose rules on the internet, which, by definition, has little respect for borders.” 

Potential solutions “risk conflating identification with safety.” 

“Effective age verification often means facial scans, AI-powered selfie analysis, or uploading a government-issued ID. Each method raises sharp concerns about privacy and data security. Who controls and stores that data? How can it be kept safe from leaks, hacks, or misuse?”

There are people who have answers to those questions, even if they are not perfect. However, the larger din tends to drown out explanation in favor of panic on both sides.  

Social media a gray area

Nowhere is this more evident than in the case of age assurance legislation for social media. The OSA has seen certain forums on Reddit dedicated to LGBTQ content and public health placed behind age gates (in Reddit’s case, provided by Persona). In the U.S., major players like Meta and X are waging endless court battles to stamp out state-level laws. And in Mississippi, HB 1126, or the “Walker Montgomery Protecting Children Online Act,” has caused Twitter alternative Bluesky to block users in the state.  

While NetChoice, the lobby group for Big Tech, has tried to block the law, a statement from Bluesky says a recent Supreme Court decision to let it stand (for now) “leaves us facing a hard reality: comply with Mississippi’s age assurance law – and make every Mississippi Bluesky user hand over sensitive personal information and undergo age checks to access the site – or risk massive fines. The law would also require us to identify and track which users are children, unlike our approach in other regions.” 

“We think this law creates challenges that go beyond its child safety goals, and creates significant barriers that limit free speech and disproportionately harm smaller platforms and emerging technologies.”

The point about smaller platforms is well-made; as Rostoum notes, “industry giants like Meta or Apple can absorb the cost of such compliance; smaller platforms and forums often cannot,” and the ability of Big Tech to dominate age assurance has been an ongoing concern in the UK.  

Leaks, breaches, honey pots, oh my!

The core issue is that these laws, while intended to protect kids, mean that all users have to prove their age to use services they have long used. Groups like the CDT say this creates a barrier to lawfully protected speech. They argue that it puts personal data at risk, pointing to the recent data leak from Tea, an app where women share dating safety information, and to the thousands of other data breaches that happen every year in the U.S. They speak of honey pots, and of the “chilling effect” age assurance could have on internet users such as women in abusive relationships.

They continue to repeat the allegation that algorithmic age verification and age estimation tools show significant bias, going so far as to quote a Yoti paper saying there is a higher error rate for women aged 25-70 with darker skin tones.

There is a fair amount of conjecture in many of these arguments. Furthermore, while there is the potential for conservative movements to use this technology as an excuse to restrict rights, there is also the potential for its value to be dismissed on claims that are equally rooted in the opposing political ideology. As one example, biometric testing by the National Institute of Standards and Technology (NIST) has found that the majority of facial recognition algorithms are more likely to misidentify people with darker skin, women and the elderly – but the most accurate algorithms show very low differentials in the Institute’s latest testing, and the effect of skin tone could be mixed in with other factors.

Another is the assertion that “face scanning is distrusted.” Face biometrics provider Yoti’s wide adoption in the wake of the OSA means it increasingly has proof that its product is trustworthy at scale. Moreover, the CDT suggests that there is only “future promise” in zero-knowledge proofs and tokenization – as though France had not already made double-blind age assurance mandatory.
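To make the double-blind idea concrete, here is a minimal, hypothetical sketch of tokenized age assurance: a third-party checker verifies the user’s age once, then issues a signed claim that carries no identity attributes, which the site verifies without ever seeing an ID document or face image. The function names are illustrative, and a real deployment would use asymmetric signatures or zero-knowledge proofs rather than the shared HMAC key used here for brevity.

```python
import hashlib
import hmac
import json
import secrets
import time

# Illustrative only: real systems use an asymmetric keypair or ZK proof,
# so the site never shares key material with the age-check provider.
ISSUER_KEY = secrets.token_bytes(32)

def issue_age_token(over_18: bool) -> dict:
    """The age-check provider verifies the user's age (ID scan, facial
    estimation, etc.) and returns a signed claim with no identity data."""
    claim = {
        "over_18": over_18,
        "nonce": secrets.token_hex(8),   # prevents token reuse/linking
        "iat": int(time.time()),         # issuance timestamp
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def site_accepts(token: dict) -> bool:
    """The site checks the signature and learns only a boolean -- never
    the user's name, document, or face image (the 'double-blind' point)."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"]) and token["claim"]["over_18"]
```

The design point is the separation of roles: the issuer sees identity but not which sites are visited, while the site sees a visit but no identity, which is the property France’s double-blind requirement targets.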

Both sides agree families know best, as world becomes Panopticon

One argument shared by both sides of the political spectrum, on both sides of the Atlantic, is that teens and families should be able to make decisions about what works for them. However, what works for parents may not work for their teens, notably on the question of LGBTQ rights. 

CDT quotes a 17-year-old who claims that, if they downloaded an app and it asked them to take a picture of themselves, they would think it’s a scam. The intention is to illustrate how users do not trust age verification or age estimation technology, because they fear it may be harvesting personal data. In the context of social media, this means literally objecting to rules requiring the provision of minimal data so teens are able to access sites that have been engineered as data guzzling machines. Being worried that your teen has to prove their age to get on TikTok is like arguing to save a shark as it gobbles up your child. 

The takeaway from all the noise is that laws are contextual. The First Amendment is an American principle; as such, it should not be imposed on sovereign nations, as the Trump administration is now attempting to do with tariff threats and proposed sanctions on EU member states implementing the Digital Services Act. 

One major issue that will be difficult to address in the current United States is the absurdity of arguing over a would-be sacred free speech law that is in the process of being shredded into meaninglessness by the federal government. At the same time, the Trump administration continues to underline CDT’s major point: the danger of surveillance and misuse of personal data becomes a reality when there are people in charge who are not just willing, but eager to use it to retain authoritarian control.   

The world we have built is in flux. The question of how to weigh concerns over privacy with concerns about online safety is fraught with the newness of the internet and its permutations. As age checks roll out globally, they will continue to face opposition from campaigners who understand how badly things can go when privacy is violated. But they will also be part of a world in which smart glasses record conversations, TikTok is a viable career for kids, and many of the old rules no longer apply.  
