Age assurance trial’s statement on bias called into question in data analysis

Estimation for under 16 needs to be better, but industry knows that
Is age assurance fair for everyone? This is the question raised by a new analysis of Australia’s Age Assurance Technology Trial (AATT), conducted by the Guardian.

Or rather, it is the set-up to the Guardian’s answer – which is, not really. Data from the trial, it says, “shows that the age estimation software tested is less accurate for people with an Indigenous or south-east Asian background. This means young people from these backgrounds are more likely to be miscategorised as over the age limit, or older people categorised as underaged when they’re not.”

The analysis challenges the final published report’s assertion that data shows “no significant evidence of adverse impact or systemic disparity in error rates across skin tone groups,” and that as a result it can be called “broadly consistent across demographic groups.”

The broad allegation is that the AATT downplayed bias in the systems it evaluated, and buried the data in public information not included in the published report: “the report from the ACCS only looked at performance of age estimation software by skin tone using automated testing, and did not publish accuracy rates by demographic background, despite including the raw information in the public files.”

That said, the AATT has made efforts to be transparent, assisting the Guardian in their analysis. In comments emailed to Biometric Update, Project Director Tony Allen says the trial “worked with the journalists at the Guardian to help them to understand the data, how we had extracted aspects of it into the main report and how we summarized and presented the information in a digestible format for multiple informed and ill-informed audiences. We did not report on every aspect of what could be derived from the data.”

AATT criticism amplified by global regulatory trends

The Guardian’s analysis is in keeping with a broader interrogation of the trial and its methodology, with criticism coming from a board member who resigned in protest, and from a lobby group representing Meta, Google and other massive Silicon Valley firms.

The debate has tended to frame the trial as a dichotomy: an evaluation that says bad tech is good. But the trial has never shied away from stating that age assurance is not perfect, “no silver bullet,” a work in progress, and so on. And, while the Guardian says the trial’s data will “underpin the government’s teen social media ban,” organizers have also been very clear that it is not a statement or recommendation on government policy, but a survey of practical possibilities given the current state of the art in age verification, age estimation and age inference.

The truth would seem to be that, as online safety laws have stormed global headlines over the last nine months, the trial has come to attract a degree of media scrutiny it may not have anticipated – and may not warrant.

Those looking for answers on bias in technology would be best advised to take up the matter with standards and evaluation bodies like NIST, which runs ongoing tests of age estimation algorithms. Those looking for answers on government policy should direct their questions to the Office of the eSafety Commissioner Julie Inman-Grant, a major driver of the regulation that will restrict social media accounts to users aged 16 and over.

The tech is central to the conversation, and the trial’s data is one of the first major reference points for its effectiveness. It says private and effective age assurance is possible, and lays out what it found among participating vendors – some of which are deemed to be more market-ready than others. It was never meant to be a perfect or complete summary of age assurance tech, but rather another progress point in efforts to evaluate and regulate it, as governments move to apply physical-world age rules (e.g. no nudie magazines until you’re 18) to the online world.

Standards, more

Further developments are forthcoming. The ISO/IEC standard for age assurance systems, 27566-1, is in the approval phase and will soon be published. The Guardian analysis will enable age assurance vendors to home in on problem areas and work on improving the tech; larger and more diverse datasets will be necessary to address accuracy issues for those with darker skin tones.

This is not news to the biometrics industry, which does what it must to sell its products, but understands that the project of refining algorithms to improve accuracy is ongoing. The AATT’s Tony Allen says he welcomes the Guardian’s analysis: “part of the reason that we wanted to be as transparent as possible with the data was to enable and encourage additional academic and journalistic analysis of the detail of that data – perhaps bringing fresh and interesting angles and insights.”

A spokesperson for the eSafety Commissioner confirms that they are not expecting a panacea. “The Department’s independent trial run by the Age Check Certification Scheme conducted some important testing producing independent evaluation results for a range of technologies. Improvement of all age assurance tools, including classifiers and facial age estimation require consistent training and retraining to ensure improvement and accuracy. This is important when it comes to homing in on the 13-15 age range and when it comes to better identifying the broad range of ethnicities reflected in Australia.”

As to the question of social media, and whether some 17-year-olds might end up having to prove their age to access social platforms like X and Facebook, the conversation is much larger than simply, “does age estimation tech work for everyone all the time?” As ownership of the world’s largest social media platforms consolidates in a small group of billionaires who have demonstrated absolute willingness to defend their platforms at any cost, and to use them as vehicles for misinformation and divisive speech, we might be better off asking whether social media has become a corrosive driver of social discord and political overreach – and whether anyone should be using it at all, never mind their age.
