How to deploy responsible age checks at scale? Big Tech titans give different answers

Google, Apple, Meta, Yoti weigh in on where, when and how to do age assurance 

One of the more remarkable contrasts at this week’s Federal Trade Commission (FTC) panel on age verification comes in a familiar format: regulator versus Big Tech. In this case, the regulator in question is FTC Commissioner Mark Meador, specifically in his prepared remarks, which take an uncharacteristically stern tone with regard to social media.

Meador begins his speech by interrogating the now-common term “digital natives.” That phrase, he says, implies an ambient transformation of the natural state of human existence, rather than an engineered result.

“But, of course, someone is responsible,” he says. “The online world in which my children, and many of your children, are growing up is a world profoundly shaped by the decisions of powerful people in high places. For the last two decades or so, these same people have been running an elaborate set of economic, psychometric, and socio-emotional experiments on America’s young people.” As such, he says, a more appropriate term is “digital subjects.”

What makes the contrast with Big Tech remarkable is that the companies running those experiments were in the room to hear Meador deliver his accusations in person. Panelists for the day’s session on how to deploy responsible age verification at scale include representatives from Meta, Google and Apple – all implicated in Meador’s broadside.

The panel is rounded out by speakers from The App Association, youth advertising firm SuperAwesome, and the CEO of UK age assurance and digital ID provider Yoti, Robin Tombs.

The key term, which straddles the two presentations, is “responsible,” in that it not only implies stewardship, but also legal liability – the thing Big Tech is desperate to avoid.

Google: data mining for algorithmic age assurance

Google’s take on responsible age assurance at scale is that it “is working to be part of the solution.” The company’s child safety policy manager, Emily Cashman Kirstein, says that the company wants to ensure kids get the best experiences from their products (which include YouTube) and has, predictably for Silicon Valley, based its solution on AI.

“We have fully rolled out an age inference model in the United States, which we’ll be rolling out globally, that helps to determine if the user is an adult or not,” Kirstein says. “We’ve heard about the promise of using machine learning and AI for this purpose, and that’s exactly what we’re doing at Google.”

The model she’s referring to has raised concerns about data privacy, and they would seem to be warranted. “Our age inference model takes information we know about our users without collecting additional data and works to confirm whether or not that user is an adult or not,” Kirstein says. Decisions are “based on factors like, how long has a user had their account? If this person has had their account for 20 years, they’re probably an adult. Is that person depending on their privacy settings? Is that person searching for tax assistance on search? Are they looking for how-to plumbing videos on YouTube?”

Google’s response to the question of age assurance at scale tells us two things. One, social media companies certainly already have the technology to know how old their users are. Two, its primary model is to simply harvest more user data.

That said, the company also allows for alternative methods for age checks, such as document verification. Kirstein notes that it has launched APIs for app developers and websites to receive age information through a zero knowledge proof (ZKP) pipeline. Overall, the company wants to stay on top of rapid change.

“It’s so important to keep an open mind to see where the elements like privacy capabilities, precision improvements will take us going forward,” she says. “This is very much an ongoing conversation for us.”

Meta: reluctant compliance, relentless litigation

Meta’s VP and Global Head of Safety Antigone Davis says the company has “taken a comprehensive approach to ensuring that teens have an age appropriate experience online.” She touts the company’s Teen Accounts, “which have built-in protections that are offered specifically for 13-to-15 year olds, but also for 16-17 year olds, and they change across those age boundaries.” This is a variant on a model that looks increasingly necessary for age assurance at scale, at least in the case of social media: a tiered system that curates content, rather than a binary yes/no threshold for access. Davis says Meta has also been integrating machine learning tools for age inference, which could facilitate this.

Perhaps Meta’s most convincing step toward a full embrace of age assurance technology is its partnership in k-ID’s OpenAge initiative, which leverages FIDO passkey technology for its AgeKey system. However, its endorsement does not come without a caveat. “We think this is a very promising piece of technology,” Davis says, “although it still puts parents in the position of having to do this or teens in the position to do this across numerous apps.”

Which brings her to Meta’s actual position on how to deploy responsible age assurance at scale: make someone else do it. Davis says Meta is pushing for “a proactive piece of legislation that would essentially put in place an approach where, at the app store, you would be able to collect both a parental approval and an age from the minor.”

“And we think the most effective way to do this is really to have a simple process: when a parent gets them their smartphone, the parent can easily go into their Apple account, their Google account or their other account, confirm they’re the parent or guardian, and give the teen’s age, which can be passed to us in a privacy preserving way, to ensure that we can provide those age appropriate experiences.”

Meta’s opinions on age assurance must be taken with a heavy dose of salt as long as it is deploying Silicon Valley’s legal lobby, NetChoice, to litigate every age check law at the state level. Davis repeats the talking point that age assurance at the platform level will mean kids or parents doing an age check for every app, as though age laws are targeting everything on the internet, rather than five to ten massive companies. All of it is a variant on the company’s trademark theme: we’ll follow the law but fight it, while we pitch alternative solutions that work in our interest.

Apple: do you not see what we’ve done here?

Apple has avoided the bulk of online safety regulators’ wrath, which has largely targeted pornography and social media – neither of which Apple deals in. However, in part because of Meta’s lobbying, legislators have begun to explore laws that put age assurance requirements at the app store level. One of the first is Utah’s App Store Accountability Act, which Meta has publicly endorsed.

Apple’s panelist is Nick Rossi, the company’s director of federal government affairs (and a 25-year veteran of the U.S. federal public service). He says the app store is already a “safe and trusted platform for users to discover millions of apps.”

“But at the same time, we’ve also created a suite of tools and features to help keep kids safe. That includes tools that allow parents to approve or disapprove of any app download or in-app purchase – to set app-specific time limits or to control who can start a conversation with their kids.” And, he says, “we’ve rolled out within this last year a privacy protective age assurance solution that gives kids and parents the ability to share kids’ age ranges with developers for the purpose of providing them with safe and age appropriate features and content, but only with the approval of parents.”

So, a tiered system which operates at the app store level. In brief, Apple says: what more do you want from us?

Yoti: we’ve been doing this for ten years

None of Google, Meta or Apple are speaking entirely in good faith: Silicon Valley has proven itself generally hostile to any attempt to shrink, temper or diminish its influence and reach. In certain respects, age checks are fundamentally inconvenient to their mission of endless growth.

For Yoti, on the other hand, age checks are part of its core business, and it has been working at the center of the industry for more than a decade. “We’ve done over 1 billion facial age estimations over the last seven years and about 1.1 billion age checks in total,” says Robin Tombs, CEO.

“We do those in lots and lots of sectors – particularly social media, gaming, adult sites, vaping, e-commerce, supermarkets, self-checkouts and a few other areas like gambling machines.

“So we have quite a lot of understanding of how to help businesses comply in the age sector and we’ve seen how that’s changed over the last few years as technology has improved and more centers and regulations have been introduced and all of the challenges that that has brought.”

Tombs, in effect, has been in the age verification trenches, and knows how legislation and regulation play out on the ground. The complexity of international regulatory regimes is one thing for companies trying to comply – but many, he says, aren’t even sure how.

“There’s lots and lots of sites which are not really sure, how do I test to ensure that facial age estimation is accurate and not biased across skin tones and ages and sexes?”

“Now that’s changed in the last two to two and a half years, particularly with the U.S. National Institute of Standards and Technology. We now do a huge amount of testing of vendors. So there are benefits now coming through, that businesses without expertise can rely on independent testing to ensure that they pick vendors who are hopefully offering good services.”

Yoti’s position, at least for most businesses that aren’t among the world’s largest, is that deploying responsible age assurance at scale is simple: hire a trusted vendor with a proven track record.

App Association: won’t someone consider the swine?

From the melee of opinion, promises and deflection, an unlikely star rises to capture the hearts and minds of the panelists. It comes to the party via Graham Dufault, general counsel for the App Association, who is there to argue that making app stores do age assurance puts the burden on apps that don’t fit into the targeted categories, and don’t present any real risk to children.

“So much of the app ecosystem and so much of the digital ecosystem in general is business to business,” Dufault says. In that scenario, age assurance “has never really been a good fit, because it presents a risk without having that commensurate need to address an age related risk or need to provide an age related benefit.”

This sets up the surprise reveal: “One of the examples I always think of, in terms of companies that don’t fall into that category of firms that really need access to age verification, is Swine Tech. It’s a provider of a software tool that helps pig farmers manage their farms, right? It’s distributed on Android. And it doesn’t present those age-related risks.”

In the end, then, the question of deploying age assurance at scale boils down to the swine farmer. In saving the children, will we doom the pigs, when Farmer Joe refuses to perform age verification to download his app? It comes back to the question of responsibility: how many people and things will online safety measures disrupt? Who will speak for the pig farmer? And who will the lawyers call when the question of liability arises?

Meador: ‘It is efficient. It is secure. And it is the future’

The last word goes to Commissioner Mark Meador, who addresses the criticism that age assurance has to make adults’ lives harder. “We hear that we won’t be able to download basic apps, like calculators, without having to submit to an onerous age check,” he says.

But this is not so.

“There’s no reason this process needs to be cumbersome and messy, or invasive. When I look at the landscape of age verification technologies today, I have to say – I’m incredibly impressed with what entrepreneurs are coming up with. Just as policymakers have grown more interested in these measures as ways to keep kids safer online, the market has responded to do what it does at its best – meet the needs of the moment in efficient and sophisticated ways. With the new age verification systems that are emerging, you don’t need to hand over your personal data, or your child’s personal data, to a company you might not trust. Instead, these systems rely on third-party providers who keep that data secure. Third parties who can contract with social media companies, or other online service providers, to simply verify whether a user is old enough to access a product, without turning over any raw personal data. This is elegant. It is efficient. It is secure. And it is the future.”

More coverage from FTC Age Verification Workshop

FTC workshop shows age assurance sector positioned to support legislative trend in US

FTC panel gets existential in pondering why online age verification matters

Age assurance policy landscape sees different camps adopt different positions
