Age assurance proposals show healthy policy activity but suffer from omissions

Answer to online age check question is not giving your driver’s license to a store clerk
An opinion piece in Tech Policy Press suggests that the solution to privacy concerns around online age checks may be staring us right in the face.

The article by Dalhousie University law student Finn Mitra uses California’s Digital Age Assurance Act, tabled in October 2025 as AB 1043, as a jumping-off point for an argument against platform-level age assurance that ends up in a place many in the digital ID community would prefer not to go: the store.

Mitra makes the often-repeated assertion that platform-level age checks rely too heavily on the collection of personal data.

“Some of the most common methods to verify users’ age can often require sensitive personal data, including government IDs and facial scans,” he says. “Due to the sheer number of age checks required to implement these laws, platform-level methods would likely necessitate collecting and retaining an enormous amount of the personal data.”

This is inaccurate, in that most certified age assurance technologies prioritize quick deletion of any personal information collected, and facial age estimation (FAE) systems do not require the retention of biometric data at all. Mitra appears to believe that age verification measures for pornographic websites mean that Pornhub retains a database of user identities that can be hacked.

Mitra points to last year’s Discord breach, in which over-retention of data was an explicit factor, as an example of why platform-level age assurance doesn’t work – and in doing so, raises an important point: not every platform will follow best practices. Nonetheless, his assertion that “platform-level frameworks offer dubious benefits,” and a casual reference to NIST’s Face Analysis Technology Evaluation: Age Estimation and Verification (FATE AEV) as evidence of bias, are under-informed.

While NIST has found that the majority of facial recognition algorithms are more likely to misidentify people with darker skin, women and the elderly, the top algorithms show very low differentials in the Institute’s latest testing.

Pitch: self-declare age and let Big Tech handle all the data

California’s model offers an alternative to platform-level age checks: it mandates that parents or users declare an age during initial setup of a device such as a smartphone or laptop. That information is encrypted and shared with apps in minimized form, as an age bracket, so they can provide age-appropriate content. In this, it echoes Apple’s Declared Age Range API model.
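The privacy logic of the device-level approach is that apps never see a birth date, only a coarse band. A minimal sketch of that data-minimization step, with hypothetical bracket boundaries (the actual AB 1043 bands and Apple’s API differ in their details), might look like this:

```python
from datetime import date

# Hypothetical age bands, loosely modeled on device-level age-signal
# proposals; the statutory brackets are not reproduced here.
BRACKETS = [(0, 12, "under_13"), (13, 15, "13_to_15"),
            (16, 17, "16_to_17"), (18, 200, "18_plus")]

def age_bracket(birth: date, today: date) -> str:
    """Return only a coarse bracket label, never the declared birth date."""
    # Compute completed years of age, accounting for whether the
    # birthday has occurred yet this year.
    age = today.year - birth.year - (
        (today.month, today.day) < (birth.month, birth.day)
    )
    for lo, hi, label in BRACKETS:
        if lo <= age <= hi:
            return label
    raise ValueError("age out of supported range")

# The operating system would pass only this label to requesting apps.
print(age_bracket(date(2011, 6, 1), date(2026, 2, 1)))  # → 13_to_15
```

The point of the sketch is the interface boundary: apps receive the label, while the declared date of birth stays on the device. It does nothing, of course, to verify that the declared date is true, which is exactly the weakness discussed below.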

The problem is that it puts self-declaration of age back at the beginning of the age check process. A 15-year-old setting up a new phone can easily enter a fake date of birth, making the whole process rather flimsy for the “assurance” part of age assurance.

Mitra believes California’s system “employs the least invasive means necessary to achieve the desired objective.” Ironically, he cites the mass centralization of data as a benefit: “the age verification process is centralized among a handful of OSPs already trusted to hold massive amounts of sensitive user information. In comparison to the vast number of unestablished parties collecting any given user’s sensitive data under platform-level proposals, this process significantly mitigates the likelihood and severity of potential data breaches.”

At issue here is what counts as an “established party.” For Mitra, having Silicon Valley giants hold massive troves of user data is preferable to having smaller, as-yet-unproven firms perform age assurance services. But NIST’s evaluations, as well as certification schemes such as the Digital Identity and Attributes Trust Framework (DIATF) in the UK, exist precisely to vet such firms against benchmarks and standards. At what point does a firm like, for instance, Regula – which has been around for more than thirty years, and whose algorithm currently sits atop the NIST FATE AEV rankings – get to be considered “established”? The question is especially pertinent if the yardstick will always be some of the largest and most powerful companies in the world.

Moreover, “established” implies “establishment,” and Silicon Valley’s titans have, in recent years, shown themselves to be willing collaborators with the U.S. government, as long as it serves their interests. Is it really safer to give data to Google than to Idemia?

Did you know you can get that online?

Mitra’s solution is a hybrid approach that replaces self-declaration with an old-fashioned physical ID check. The model would require operating system providers to “have an authorized clerk check each customer’s ID at retail stores and carrier shops when they purchase a new device. The clerk will then attach a signal to the device representing the customer’s age.”

Mitra concedes that, “an ideal model should address accessibility for individuals in remote regions or without reliable transportation.” In the end, it must be assumed that he is somehow unfamiliar with online shopping, since the natural next question is, “what if I want to order a laptop on Amazon?” – and the answer is, you’ll need a way to check a customer’s age online.

Libertarian think tank would prefer optional law

California’s law has its defenders. A piece from the Reason Foundation, authored by technology policy fellow Richard Sill, calls the Digital Age Assurance Act “a meaningful first step toward a more privacy-preserving, age-signaling model intended to minimize data exposure while improving compliance certainty for businesses.”

Sill believes “AB 1043 correctly prioritizes privacy and security by using a self-declared age signal rather than a verification process. The law integrates core privacy-by-design principles by separating identity from compliance status and ensuring that user data never leaves local systems in identifiable form.”

The Reason Foundation, however, is arguing from a libertarian position for which the ideal would be no legislation at all. Its stated concern about the vulnerability of sensitive personal data puts a fake moustache on its main policy thrust, a “simple but meaningful adjustment” that it says could make the law even better: “make the device-level age signal optional for parents rather than compulsory.”

In this scenario, the user opens a device, is presented with a prompt that says “do you want to provide your age information,” and a vast majority of users, trained to be cautious by limitless fearmongering about data breaches, laugh as they click NO.

Pressure will come from parents as platforms evolve

The mustering of opinions about age assurance and online safety policy shows that the issue has reached the think tanks, and that there will likely be continued pushback against digital age check technology for the immediate future. However, arguments that point policy backwards – toward handing a clerk a driver’s license to prove your age, and giving them your address in the process; or toward a vacuum in which legislation is mere window-dressing for political ideology – are unlikely to succeed.

Regulatory momentum is one thing, and the present-day U.S. has shown itself determined to redefine its norms to distinguish itself from the rest of the world, notably on issues of speech that are inextricably tied to economic power. It is less and less likely to follow Europe or the UK in legislating tech.

Pressure from below is another matter. Parents seeing what contemporary society’s version of technology does to their kids are unlikely to stop pushing for change.

Here, the case of X is illustrative. While the microblogging platform established as Twitter has never been as popular with kids as Instagram or TikTok, until a few weeks ago, it still lived in the public imagination mostly as a social media site. Now, as users continue to use its large language model, Grok, to generate images of naked children, it has become something else – best described as follows in a recent headline in the Financial Times: “X, the deepfake porn site formerly known as Twitter.”
