New technology and regulation could make the internet grow up quickly
The negative effects of social media and inappropriate online material on youth are well documented, creating a new urgency around age verification for social platforms. Should kids be allowed to use social media? At what age? And how will parents limit access? (Most social media platforms already have age restriction policies in place, but enforcement is another matter.) These questions are stirring up new approaches in the digital ID and security space, both commercially and politically.
In a move to offer stronger digital safeguards for underage users, the digital safety kit provided by SuperAwesome, a subsidiary of Epic Games, is about to gain facial age estimation capability via a partnership with Yoti. A company announcement confirms that the deal will see Yoti’s age estimation technology integrated into SuperAwesome’s Kids Web Services (KWS) verification dashboard. The UK company emphasizes that its tool does not use facial recognition: age estimation approximates an individual’s age without identifying who the person is.
Also important to note is that the technology is not intended to scan kids’ faces. Rather, the verifiable parental consent (VPC) system identifies parents or guardians as adults who have permission to adjust settings tied to children’s personal information. Typically, VPC has involved authentication via credit card statements, personal ID or other sensitive personal information. The Yoti integration gives adults the option to be verified by taking a selfie, which is deleted once the transaction is complete, thereby avoiding the sharing of sensitive personal information.
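The flow described above, where a selfie is used only to estimate age and is then discarded, can be sketched in a few lines. This is a hypothetical illustration, not Yoti’s or KWS’s actual API: the function names, the placeholder age model, and the adult threshold are all assumptions made for the example.

```python
# Hypothetical sketch of a selfie-based verifiable parental consent (VPC) check.
# All names and values here are illustrative; real services use their own
# models, thresholds, and data-handling pipelines.

ADULT_THRESHOLD = 25  # assumed buffer above 18 to offset estimation error


def estimate_age(selfie: bytes) -> float:
    """Stand-in for a facial age estimation model.

    A real model returns an estimated age from the image without
    attempting to recognize who the person is.
    """
    return 34.0  # placeholder result for the sketch


def verify_parental_consent(selfie: bytes) -> bool:
    """Estimate the adult's age from a selfie, then discard the image."""
    try:
        return estimate_age(selfie) >= ADULT_THRESHOLD
    finally:
        # Mirror the deletion step: the image is not retained after the check,
        # so no identifying material is stored or shared.
        del selfie
```

The design point the sketch captures is that the only output of the check is a yes/no consent decision; neither the selfie nor an identity ever leaves the verification step.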
Lawmakers seek ways to keep kids off social media
Protecting kids on social media is not just a commercial issue; it is also an act. Specifically, the Protecting Kids on Social Media Act is a new piece of federal U.S. legislation aimed at restricting social media platforms to users aged 13 and older. According to CNN, the bipartisan federal bill was introduced in the Senate on April 26. It lays out policy designed to address problems such as social media addiction, privacy violations and ineffective verification methods.
“Social media companies have stumbled onto a stubborn, devastating fact,” says Brian Schatz, a Democratic senator from Hawaii who helped craft the bill. “The way to get kids to linger on the platforms and to maximize profit is to upset them — to make them outraged, to make them agitated, to make them scared, to make them vulnerable, to make them feel helpless, anxious, despondent.”
In summary, the draft bill requires that social platforms “verify the age of their users, prohibit the use of algorithmic recommendation systems on individuals under age 18, require parental or guardian consent for social media users under age 18, and prohibit users who are under age 13 from accessing social media platforms.” Under the bill, minors aged 13 to 17 could access social media sites such as Instagram and TikTok, but only with parental consent.
The bill does not make recommendations on which age verification technologies companies should use. But it does call for them to “take reasonable steps beyond merely requiring attestation, taking into account existing age verification technologies, to verify the age of individuals who are account holders on the platform.” It also prohibits the use of algorithmic recommendation to advertise to users under 18.
Legislation to address similar issues is also on the move at the state level, with more than 20 state legislatures proposing bills aimed at introducing age verification for social media and adult websites. However, in tandem with the introduction of new laws come concerns about privacy: the Kids Online Safety Act (KOSA), introduced in 2022, is facing resistance in the House of Representatives, as well as calls from civil society groups to abandon it, citing privacy and safety concerns for marginalized communities.