Age verification debate rages from New York to Washington
Iain Corby of the Age Verification Providers Association (AVPA) took the floor in New York at a Philippines-led UN hearing to make the case that “age verification is convenient, accurate, cheap, privacy-preserving, secure, effective, accessible, inclusive and enforceable.”
Age verification laws are expanding across the globe, Corby says. “We’re seeing it in Australia where they’re going to trial age verification, the European Digital Services Act requires it and they’ve already got a video sharing platforms directive which requires it, and in the UK there’s an Online Safety Act. And we’ve heard about many U.S. states looking at this. The Canadian government is debating it, as well.”
“So, there are plenty of governments around the world who’ve realized it’s possible to do this.”
Corby is never shy with the facts. When it comes to age estimation, in which a selfie is analyzed by AI to ballpark a person’s age, he explains that the technology is “accurate to within an average error of about a year and a half for the age groups that we’re worried about.”
Standards are in place to certify accuracy. A recent UK impact assessment estimated the average cost of an age check at twelve cents, and expects that cost to come down. The market offers a variety of methods to choose from. And besides, most of what’s been tried so far simply hasn’t worked to stop kids from viewing pornography.
Corby says AVPA believes “the people who need to take the lead responsibility for this are those who are publishing it. They are the closest to the material. They are the ones who can tell what is harmful and what is not and they are the ones who should be responsible for protecting their users.”
They should, in short, be using age verification.
Bezos-owned Washington Post runs hand-wringing article about data collection
One can imagine a thoroughly unimpressed Corby opening the July 7 edition of The Washington Post, which includes a lengthy article on age verification that names a few of AVPA’s members.
“Companies such as Yoti, Incode and VerifyMyAge increasingly work as digital gatekeepers, asking users to record a live ‘video selfie’ on their phone or webcam, often while holding up a government ID, so the AI can assess whether they’re old enough to enter,” says the piece by Drew Harwell. “While the systems are promoted for safeguarding kids, they can only work by inspecting everyone – surveying faces, driver’s licenses and other sensitive data in vast quantities.”
The piece lays out familiar arguments: the tech could end up blocking people who are of age, violating constitutional rights. It could be used for censorship. It could be discriminatory. And what about VPNs? Will kids just migrate to the grimy underworld of the Dark Web? What are these companies doing with all our kids’ data?
Yet, while it leans into the narrative of rogue tech firms and squeezes some juice from the case of people with disabilities, the article concedes of age verification tools that “for the most part, these systems have worked.”
Furthermore, it cites a Pew Research study from 2023 showing that more than 70 percent of U.S. adults (and 56 percent of teens) say they support age verification for social media.
Concerns expressed by some in the article reflect the issue of education and public communications; despite Iain Corby’s heroic efforts, many are simply afraid of language like “facial age estimation technology.” And many parents aren’t keen on the idea of their kids’ faces being used to train AI models – for which it is hard to blame them.
Age verification providers respond to poorly chosen language
Unsurprisingly, some of the providers that have found themselves in WaPo’s spotlight have published comments in response.
On LinkedIn, Robin Tombs laments the article’s failure to note that, for facial age estimation, “all faces are instantly deleted & no ID docs or names, DoBs, addresses, mobile phone numbers or credit card numbers are submitted.”
“Some of the language chosen is poor,” writes Tombs; “for instance claiming faces are ‘surveyed’. Surveillance involves identifying individuals but privacy preserving FAE does not identify any faces. It’s not facial recognition.”
Finally, Tombs says, the piece is shortsighted, not to say a bit alarmist. “In a few years’ time, well regulated, privacy preserving FAE will keep hundreds of millions of children off adult oriented sites that most adults think children should not access, and some of this article will look very dated.”
As of publication, neither Incode nor VerifyMy had issued statements.
Article Topics
age verification | AVPA | data privacy | face biometrics | facial analysis | selfie biometrics | Yoti