Legal age is porous online; digital ID firms, regulators, platforms spar for leverage

The increasing demand for age assurance tools to protect youth online is prompting new investments in biometrics and digital identity verification.
Projects receive funding to fight online child abuse and exploitation
Safe Online, which bills itself as “the only global investment vehicle dedicated to keeping children safe in the digital world,” has staked $2 million to support ten projects focused on age assurance and on preventing online child sexual exploitation and abuse (CSEA).
A release from Safe Online emphasizes a particular focus on the live streaming of abuse. The organization’s director, Marija Manojlovic, says the current social and regulatory approach to the issue is fragmented and reactive. “With these ten new projects,” she says, “Safe Online is promoting a unified and proactive approach that focuses on upcoming legislation, new and emerging technology and tackling rapidly evolving threats.”
Recipients were selected based on relevance, potential scalability, strength of research, context against global and national comparables, and solid legal underpinning. The selected organizations include several names from both the private and public ends of the biometrics and digital ID spectrum.

Yoti, which has firmly established itself as a leader in biometric age estimation, was given the nod for its Level Playing Field project, aimed at expanding monitoring and secure verification capabilities in the CSEA ecosystem through digital ID, age assurance, identity verification and e-signatures. The EU’s euConsent project receives support for its second phase, following a phase-one pilot of “pan-European, open-system, secure and certified interoperable age verification and parental consent.” T3K Forensics is using AI in its CounterACT project to detect exploitative content in livestreams, joining Web IQ, Rigr AI and Universidad de los Andes in leveraging machine learning and algorithmic detection.
The list reflects the global scope of the challenge, with projects centered on the Philippines, New Zealand and Latin America. A detailed list of recipients can be found here.
UK report shows weakness of legacy age verification measures
In theory, most social media networks require users to be a minimum age before they can create an account. However, a new report from the UK’s communications regulator suggests what many parents already know: without age assurance tools that use biometrics or other ID channels to authenticate a person’s age, age verification is mostly perfunctory.
According to Ofcom, 22 percent of 8-17-year-olds with at least one social media profile have their user age set to over 18, putting them at greater risk of seeing adult content. A third of children aged 8-15 with a social media profile have a user age of at least 16.
Moreover, advanced technology that remains wildly underregulated has made inroads with the young: 79 percent of online teenagers and 40 percent of kids aged 7-12 say they have used generative AI tools such as ChatGPT, Midjourney or – most popular with kids – Snapchat My AI, which half of UK children between 7 and 17 report using.
Those most concerned about sexually explicit content might take some solace in knowing that Ofcom ranks other harms higher, listing swearing, misinformation, “content showing dangerous stunts or online challenges,” unwanted friend requests and online gambling ads as the biggest potential digital harms facing kids.
Social media restrictions enacted and disputed across US
As U.S. states such as New Jersey, Mississippi and Utah move to tighten laws around age verification for those wishing to create a social media profile, the legal back-and-forth over legislation is likely to increase.
Certainly, examples such as the documented harm Instagram does to teen girls and the collapse of Twitter into the shambolic X demonstrate the need for solid age-related social media regulation. This is especially true as it becomes clear how little Silicon Valley’s tech impresarios care to enforce their own rules.
The Independent reports on a newly filed legal complaint by 33 U.S. states, which accuses Instagram’s parent company, Meta, of knowingly allowing underage users onto its platforms and harvesting their data, and – in the words of New York Attorney General Letitia James – having “profited from children’s pain by intentionally designing its platforms with manipulative features that make children addicted to their platforms while lowering their self-esteem.”
Meta has expressed disappointment in the suit and will likely stick to its track record of active legal pushback against proposed protections. One such challenge is already underway: a motion filed in an Arkansas court objecting to Act 689, the state’s so-called Social Media Safety Act.
The Arkansas Democrat-Gazette reports that attorneys for NetChoice, a Washington-based tech lobbying group, have filed for summary judgment in the group’s case against the state. Among the plaintiff’s objections to the law are the requirement that major social media platforms license age verification systems from third-party digital ID vendors, and the rule that users under 18 obtain parental permission before creating an account.
The legislation exempts companies that generate less than $100 million in annual revenue, as well as cloud storage platforms and sites for which social interaction is not the primary use. The state’s attorney general has vowed to “vigorously oppose the motion for summary judgment.” However, a U.S. district court judge has already indicated that the law likely violates the First Amendment.
To paraphrase the old adage, online age verification is more than just a number.