Who would you rather trust with your data – Facebook or Yoti?

Objections to privacy risks in age assurance tech serve social media's agenda
Vendors, academics and observers on all sides are weighing in on the publication of the final report for Australia's Age Assurance Technology Trial (AATT). Their reactions range from validation to clarification to stern objection.

The Science Media Centre (SMC) has collected a selection of comments from experts, most of which fall into the objection category. Broadly, they reflect a deep distrust in the notion of online age assurance as a whole, and an antipathy toward unnecessary data collection. Yet many are written in explicit defense of social media platforms, which exist (at least on a corporate level) to collect and monetize user data.

As such, they illustrate a strange tension at the heart of Australia’s (and the world’s) debate over age assurance requirements for social media: providers of online age check products are being preemptively classified as worse stewards of personal data than the giant corporations that have built their empires on hoovering up data on an unprecedented scale.

Scope creep from unnecessary data collection still a sore point

Dr. Hassan Asghar, a senior lecturer in Computer Science at Macquarie University, notes that document-based verification is already familiar from select government services. But “when we expand this to social media platforms, we’re suddenly asking people to share their most sensitive documents with many more companies. Even though these companies aren’t supposed to keep our documents after checking them, it’s really hard to verify whether their processes are actually secure enough to properly delete everything once they’ve confirmed our age.”

Daniel Angus – a professor of Digital Communication in the QUT School of Communication, director of QUT's Digital Media Research Centre and chief investigator in the ARC Centre of Excellence for Automated Decision Making and Society – takes issue with the admission that "unnecessary data retention may occur in apparent anticipation of future regulatory needs," which he calls "an open door to scope creep and privacy risks for all Australians." He also laments error rates showing that even the best-performing systems post false negative and false positive rates of around 3 percent. "In real terms, that means tens of thousands of legitimate Australian users would be wrongly locked out of digital services. The report never grapples with these consequences, reducing them instead to abstract accuracy figures."

Critics can’t decide if social media is harmful or necessary

In her argument, Dr. Dana McKay, a senior lecturer in innovative interactive technologies at RMIT University, manages to assert that “there are many good reasons people, including children, may wish to hide their real identity from social media companies and those they interact with online, including data stewardship practices of social media companies” – and, a few sentences later, to lionize social platforms as a key resource for LGBTQ+ kids, claiming that “those who most need the external support and connection offered by social media are also those most likely to be denied it by these mechanisms.”

Dr. Jake Renzella, head of the Computing and Education Research Group, director of studies (Computer Science) and director of Digital Infrastructure Strategy at the University of New South Wales, likewise has some suspicion to drum up. “The fundamental challenge here isn’t the technology, but the new risks we introduce by outsourcing this critical function. The report proposes adding dozens of third-party providers into the process, each becoming a potential point of failure for data security.”

The voices in opposition echo the criticisms of John Pane, chair of Electronic Frontiers Australia (EFA), who resigned his position on the stakeholder advisory board of the AATT over concerns it hadn't adequately defined its terms. "When I actually asked the technology trial, what do you mean by private? What do you mean by effective? What do you mean by efficient? No one can give a good explanation."

‘Kids, let me tell you about a time before social media…’

Not everyone has their pitchforks out for age assurance providers. Dr. Belinda Barnet, a senior lecturer in Media at Swinburne University of Technology, says that, “as expected, the report found that there were some privacy and security concerns with several of the methods, but that there are third-party verification providers who could deliver age assurance without unnecessarily storing our data. I would personally like us to adopt the reliable third party method rather than giving Facebook our passports.”

Many of the commentators argue that the so-called ban for users under 16 is a band-aid solution, and that it doesn't target the core harms that social platforms present – i.e., "real safety depends on tackling harmful content at the source."

There are high-level questions that must be asked if the debate is to be had in good faith. Why do we trust the age assurance sector less than massive social platforms, run by some of the world’s richest men, which have had demonstrably negative effects on our media and political environments? Why are Mark Zuckerberg and Elon Musk considered more worthy of defense than an executive like Yoti’s Robin Tombs? And why has social media been classified as a necessity for the survival of certain vulnerable groups?

For years, Silicon Valley has successfully sold the narrative that social media is something we can’t live without. Privacy watchdogs are now coming to its defense against regulators and providers who have voluntarily subjected themselves to an independent third-party evaluation. Meanwhile, we may recall that there has never been a public social media tech trial on the scale of Australia’s effort – unless one counts Zuckerberg’s testimony to U.S. Congress, in which he apologized to parents of children who died after experiencing sexual exploitation or harassment on social media.

All or nothing at all: demanding perfection in age assurance is folly

A particular bit of flawed thinking is encapsulated in a statement from Tama Leaver, a professor of Internet Studies at Curtin University and a chief investigator in the ARC Centre of Excellence for the Digital Child. Leaver asserts that “the technical thresholds that the trial used to determine whether a tool was viable or not seem completely at odds with the expectations of ordinary Australians online. Australians want to know if these tools work properly, and properly means work every time. The evidence in this report shows that these tools simply aren’t reliable.”

Age assurance is not a zero-sum game. The notion that a system that is not correct 100 percent of the time should be binned is nonsense; ChatGPT, which has been accused of encouraging a teen to kill themselves, was initially released into the world with little outcry from digital privacy absolutists. If we are to abandon tech because it is imperfect and poses risks, surely the chatbot will need to go – along with your car. Otherwise, the task is the same as it long has been for the mobile death machines that are automobiles: work to make things safer over time, building public trust as you go.
