AVPA flags AI, distrust in government and big tech as hurdles for age verification
On February 27 at 10am EST, Biometric Update will host a webinar entitled Age Verification: Lessons Learned from the UK, to delve into the increasingly complex question of how to facilitate proof of age online amid powerful threats to data security. UK policy debates about biometric age verification technology are at full simmer, as evidenced by lively discussions at the Westminster eForum policy conference, which took place on February 8. Experts from government, academia and the private sector gathered to stew over questions about the regulation, legislation and societal consequences of emergent AI-assisted technology for online identity verification, and beyond.
The half-day event hosted detailed discussions of the UK Online Safety Act, as well as an industry panel on developing online safety to meet emerging, future and global threats, featuring Iain Corby, executive director of the Age Verification Providers Association (AVPA).
AVPA comprises 25 member firms working in age assurance, which includes both age verification and age estimation (via facial and voice analysis). Speaking on emerging threats and how innovations in AI and age verification technology can help formulate an effective response, Corby says that while government regulators are mounting a noble effort to keep pace, the biggest challenge may be convincing the public that their data and privacy are secure.
New moves from Apple, Google, CNIL show private and government investment
“The whole essence of age verification,” says Corby, “the reason our whole industry exists, is to prove your age online without disclosing your identity.” He says the simplest way to do that is still through a trusted third party. But he points to a cryptography initiative by the French data protection authority CNIL, which provides a “double-blind solution,” as an example of a new direction in thinking about age verification and data security.
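The double-blind idea Corby describes can be sketched at a high level: a trusted provider attests only to a yes/no age claim, and the consuming site verifies the attestation without learning the user's identity (nor the provider learning which site asked). The following is a minimal illustrative sketch, not CNIL's actual protocol; all names are hypothetical, and the shared HMAC key is a simplification standing in for the public-key signatures or zero-knowledge proofs a real scheme would use.

```python
import hashlib
import hmac
import secrets

# Illustrative stand-in: a real scheme would use public-key signatures or
# zero-knowledge proofs so the verifying site holds no secret material.
PROVIDER_KEY = secrets.token_bytes(32)  # held by the age verification provider


def issue_age_token(user_is_over_18: bool) -> dict:
    """Provider checks the user's age (e.g. against a document) and issues a
    token carrying only the yes/no result plus a one-time nonce -- no name,
    no birthdate, and no knowledge of which site will consume it."""
    nonce = secrets.token_hex(16)
    claim = f"over18={user_is_over_18}&nonce={nonce}"
    tag = hmac.new(PROVIDER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}


def site_accepts(token: dict) -> bool:
    """The website checks that the token is genuine and asserts over-18,
    without ever learning who the user is."""
    expected = hmac.new(
        PROVIDER_KEY, token["claim"].encode(), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, token["tag"]) and (
        "over18=True" in token["claim"]
    )


token = issue_age_token(True)
print(site_accepts(token))  # True: age confirmed, identity never disclosed
```

The key property is that the token's claim contains no identifying attributes, so even a site that logs every token learns nothing beyond "this visitor was verified as over 18 by a trusted provider."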
In the UK, Corby says online age verification through government digital ID based on the UK digital identity and attributes trust framework continues to face hurdles in development and public trust. And while big tech is increasingly involved in digital ID verification – Corby notes the recent certification by the Age Check Certification Scheme of Google’s age estimation tech, and that “Apple has moved heavily into the mobile driver’s license market in the U.S.” – there are also outstanding trust issues around what tech titans are doing with public data, to say the least.
AI amplifies threat to effective biometric identity and age verification
Like everyone, AVPA is concerned about the disruptive potential of generative AI and deepfakes. “AI is now generating extremely good fake documents,” says Corby, “and of course anybody who’s watched kids playing around on Snapchat can see that they can use a filter to make themselves look twenty years older.”
Corby, however, is relatively sanguine about the threat, or at least confident in the work being done in response. He says AVPA is working with Swiss partners, including the Idiap Research Institute, on a project funded by the Swiss and UK governments to develop defenses against sophisticated AI attacks. “What we’re effectively going to do,” he says, “is use AI to catch AI. It will be a bit of a cat-and-mouse race. But I wouldn’t want anyone to think that, as an industry, we’re unable to cope with that.”
Corby mentions one tangential threat in the age verification space that points to some of the wider social issues that will unfold as synthetic people become more common: mass job losses in the adult entertainment industry. Quoting a contact in the space, Corby says that within five years, many performers will have been replaced by synthetic models generated by AI. “Which then presents the problem, well how do you confirm the age of an AI model? So in many ways, we’ll have to move back toward age estimation tools, to estimate how old this synthetic person looks.”
EDRi data experts decry age verification tools as ‘a sledgehammer’
An opinion piece in Euronews argues that the time for bigger, more strategic thinking about what AI means for society is now – and that many current age verification tools are making things worse. “The sledgehammer approach of age verification tools won’t make the internet safer,” reads the headline on the piece by policy and communications officers with European Digital Rights (EDRi).
Focusing its ire on language in Ireland’s Online Safety Code that would impose age verification restrictions for social media use, the piece claims age verification would limit internet access for young people, for whom it is a fundamental aspect of life, and discourage democratic participation.
“A 2023 survey showed that 56 percent of young people in Europe consider their anonymity crucial for their activism and for organizing politically among peers,” says the piece. “Age verification tools rely on harmful mass data gathering that threatens the privacy and security of everyone.” The authors argue that, with many companies under Irish jurisdiction, the decision will be felt across Europe. “With this new binding code, the Irish media regulator is preparing to force many large tech companies to process this sensitive data on a huge scale, to predict people’s ages.”
Noting related legislation in the UK, Spain, Italy and Belgium, the authors see a slippery slope, despite “a mountain of evidence showing that large tech companies and states cannot be trusted with handling people’s most private data and with looking after their digital safety.”
Questions of public trust tied to financial interests
The piece ultimately calls for “a holistic and careful approach to online safety” that includes alternative options for age access control such as puzzles for verification or children’s accounts – and, explicitly, for political and legal accountability in a market set to be worth €4 billion by 2028. “The EU already has strong privacy and data protection laws, which most current age verification methods fail to respect,” they write. “Lawmakers must not ignore the clear financial interests of those developing and selling these tools and the conflict of interest they might create for stakeholders involved in the political debate.”
As recent hearings on the effect of major social media networks on youth have demonstrated, lawmakers would be wise to heed that warning.