Age assurance may do a lot of things, some unintended, report argues
Age assurance technology is becoming a hot topic, prompting observers to weigh in on its effectiveness and potential challenges.
The Open Technology Institute notes in a report published by liberal think tank New America that more than 60 bills involving online age restrictions were introduced in the United States during 2023, and the trend has continued this year. If passed, these laws are likely to be challenged, and a significant number declared unconstitutional, according to the report.
The 43-page report “Age Verification: The Complicated Effort to Protect Youth Online” delves into current terminology and practices, state and federal efforts, legal, technical and social implementation challenges, social media features and recommendations for minimizing harms from age assurance from an American perspective.
“As of this report’s publication, strict age verification—confirming a user’s age without requiring additional personal identifiable information (PII)—is not technically feasible in a manner that respects users’ rights, privacy, and security,” the report says. It refers to a 2022 report from French regulator CNIL, but neglects to mention that CNIL has since proposed its own age assurance method, which it says is highly privacy-protective. The report also refers to euCONSENT pilots and a forthcoming NIST report, the positive results of which were previewed earlier this month.
The report raises important considerations around the constitutionality of restricting access to content, data privacy and security risks, and determining scope.
However, the word “may” does a lot of heavy lifting in the report, introducing nuanced issues without the details needed to understand how they play out in practice. The section on biometric facial age estimation notes the application by Yoti and its partners to the FTC to approve the technology, which was denied pending further information.
“Meanwhile, opponents of the method raised concerns regarding privacy and accuracy for determining specific ages rather than age ranges—as well as determining the ages of people of color and transgender, nonbinary, and disabled people, who may be disproportionately subject to false negatives or positives.”
The point is supported with a reference to feedback provided by the Center for Democracy & Technology to the FTC.
“To be sure, the applicants note that particularly the latter difference is relatively small,” that feedback says. “If the Commission agrees that the differences shown are not sufficient to indicate bias, it should still provide guidance on when such differences would become material and indicate bias in other implementations of this methodology.”
Similarly, in the section on social media, the report authors write that “operators who choose to verify ages through estimation or inference models may increase surveillance and monitoring of users’ online activity, such as their content, engagement, social networks, geographic location, screen time, linked accounts, and browsing history.”
Given that this method has not been proposed by any facial age estimation provider, regulator or major online platform worldwide, the function of the word “may” here seems to be to associate the implications of the most invasive form of age inference and estimation on offer with facial age estimation, without reference to evidence.
Also damaging to the argument is the implication that a better way to protect young people’s privacy online is to give them unfettered access to social media platforms like Meta and TikTok.
The report makes a series of claims about the ease with which age assurance systems around the world are bypassed, which mostly do not apply to the methods approved by regulators, or clash with the text of the legislation.
“Despite the efforts of legislators and online operators, users can still use tools like virtual private networks (VPNs) to bypass age verification,” the report states.
AVPA Executive Director Iain Corby calls this “the VPN fallacy,” saying during a panel discussion in late 2023: “I’ve never seen a piece of legislation that says, ‘we wish to protect children in this particular U.S. state (unless they use your VPN, in which case it’s okay).’ That’s not how the legislation is written.”
If legislators have included enforcement mechanisms with their rules, this issue is neither technically nor socially challenging.
Ultimately, the OTI makes five recommendations to state and federal legislators. Lawmakers should consider alternative ways of keeping young people safe online. User privacy and choice should be designed into age assurance systems, through data minimization and standardized third-party facilitation. Greater transparency and control over user experience should be provided, and the certainty of unintended consequences to vulnerable communities from restricting access to content should be recognized. Lawmakers should also invest in cross-sector research and collaboration to standardize best practices.
“We agree with this report’s key conclusion that privacy-respecting age verification is possible via the use of existing and well-understood cryptographic principles e.g. privacy enhancing technologies (PETs),” Corby wrote to Biometric Update in an email responding to the report.
“We announced earlier this month that the euCONSENT project will be deploying device-based, tokenized, interoperable and re-usable age assurance backed by new industry standards, as called for by this study’s authors.”
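The tokenized, minimal-disclosure approach Corby describes can be sketched in a few lines. The example below is a hypothetical illustration, not euCONSENT’s actual protocol or any published standard: an issuer performs the age check once, then signs a bare over-18 claim that relying sites can verify without ever seeing identity documents or a birthdate. The function names, token layout, and HMAC scheme are all assumptions for illustration; real deployments would use asymmetric signatures and standardized token formats.

```python
import base64
import hashlib
import hmac
import json

# Hypothetical sketch of a reusable, minimal-disclosure age token.
# The issuer signs only a boolean "over_18" claim -- no name, birthdate,
# or other PII -- so a relying site learns nothing beyond the age claim.

ISSUER_KEY = b"demo-secret-key"  # illustration only; a real issuer would
                                 # use an asymmetric signing key

def issue_age_token(over_18: bool) -> str:
    """Issuer side: after a one-time age check, sign a minimal claim."""
    claim = json.dumps({"over_18": over_18}, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, claim, hashlib.sha256).digest()
    # Base64-encode claim and signature separately so "." is a safe separator.
    return (base64.urlsafe_b64encode(claim) + b"." +
            base64.urlsafe_b64encode(sig)).decode()

def verify_age_token(token: str):
    """Relying-site side: check the signature, read only the age claim.

    Returns True/False for a valid claim, or None if the token is forged.
    """
    claim_b64, _, sig_b64 = token.partition(".")
    claim = base64.urlsafe_b64decode(claim_b64)
    sig = base64.urlsafe_b64decode(sig_b64)
    expected = hmac.new(ISSUER_KEY, claim, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered or forged token
    return json.loads(claim)["over_18"]
```

Because the token carries no PII and can be re-presented across sites, this is the general shape of the device-based, interoperable design the euCONSENT announcement alludes to.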
TikTok takedown draws accidental physical-world parallel
A takedown of TikTok Lite by Gaming Deputy is less focused on age assurance, but claims that the social media platform’s age assurance provided by Yoti “is actually easily circumvented (a selfie of someone else, for example).”
Yoti CEO Robin Tombs writes in a LinkedIn post that between the presentation attack detection, injection attack detection and authentication technologies provided by the company, Gaming Deputy’s assessment is not accurate.
Tombs points out that Yoti offers a feature enabling one-to-one biometric facial authentication to ensure that users are not circumventing age checks by submitting a selfie from an older person acting as a proxy.
The more important point, however, is that proxy bypasses are a familiar problem in the physical world.
“Just because proxy sales are possible, regulators do not tell supermarkets not to bother checking alcohol sales to children,” Tombs points out.
This post was updated at 12:34pm Eastern on April 25, 2024 to include a comment from Iain Corby.
Article Topics
age verification | data privacy | face biometrics | legislation | Open Technology Institute | selfie biometrics | Yoti