YouTube, Meta lean into age assurance in 2025

In the past twelve months, age assurance for online content – a method for establishing that a user is of legal age to access a digital site, app, platform or service – has gone from a relatively niche issue to a priority agenda item for governments, activists and the biggest names in tech.
With a pivotal Texas case currently before the U.S. Supreme Court, a major trial of age verification and age estimation technologies ongoing in Australia, and a flurry of regulatory activity, from new Ofcom guidelines in the UK to work on an international standard, age assurance has become a fulcrum of larger global debates over rights, freedoms and mental health.
Google to use machine learning for age estimation
In an update this week, Google announced that it will be deploying machine learning algorithms to estimate the age of YouTube users. The company says that, as YouTube replaces television (or becomes television) as the primary medium for consuming video content, it is “laser focused on protecting our youngest users.”
“That’s why we’ll use machine learning in 2025 to help us estimate a user’s age – distinguishing between younger viewers and adults – to help provide the best and most age appropriate experiences and protections.” Testing will begin in the U.S. and expand to more countries over time.
According to a report in The Verge, YouTube’s age estimation model will “use existing data about users, including the sites they visit, what kinds of videos they watch on YouTube, and how long they’ve had an account to determine their age.”
If Google’s age estimation tool detects that a user is under 18, it will notify them of a change to settings and prompt them for age verification.
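Google has not published details of the model, but the description suggests a conventional supervised classifier over behavioral signals. Purely for illustration, the toy sketch below – every feature name, label and threshold is invented, not YouTube’s – shows that general pattern: score the likelihood that an account belongs to a minor, and route likely minors into teen settings plus an age verification prompt.

```python
# Illustrative sketch only: YouTube has not published its model. This toy
# classifier stands in for the kind of behavioral age estimation described
# above, trained here on synthetic data with made-up features.
from dataclasses import dataclass

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

@dataclass
class UserSignals:
    account_age_days: float      # how long the account has existed
    kids_content_ratio: float    # share of watch time on children's videos
    late_night_ratio: float      # share of sessions after midnight
    avg_session_minutes: float

def to_features(u: UserSignals) -> list[float]:
    return [u.account_age_days, u.kids_content_ratio,
            u.late_night_ratio, u.avg_session_minutes]

# Synthetic training data standing in for labeled user histories.
rng = np.random.default_rng(0)
X = rng.random((500, 4)) * [3000, 1, 1, 120]
# Toy labeling rule: newer accounts watching lots of kids' content are minors.
y = ((X[:, 0] < 800) & (X[:, 1] > 0.4)).astype(int)  # 1 = likely under 18

model = GradientBoostingClassifier().fit(X, y)

def handle_user(u: UserSignals) -> str:
    p_minor = model.predict_proba([to_features(u)])[0][1]
    if p_minor > 0.5:
        # Mirrors the flow described above: apply teen protections and
        # prompt for age verification to override the estimate.
        return "apply-teen-settings-and-prompt-age-verification"
    return "adult-experience"

print(handle_user(UserSignals(200, 0.7, 0.05, 45)))
```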
Addictive social media algorithms begin showing up on regulators’ radar
YouTube is something of an edge case in the social media debate; because of its sheer scope in terms of users and use cases, it often gets a legislative pass. (Per YouTube, viewers are watching, on average, over one billion hours of YouTube content daily – and that’s only on televisions, which are now the primary device for YouTube viewing in the U.S.)
Legislators targeting addictive algorithms, which collect user data and process it to tailor content feeds, more often have in mind endless scrolling and swiping apps, like X or Instagram, which leverage casino-style mechanics and dopamine hits to encourage addictive behavior.
Meta is reportedly following Google in testing machine learning for age estimation. Others will likely be forced to do the same, as legislators push and polish bills making their way through state governments across the country.
A piece in Inside Investigator says Connecticut’s proposed House Bill 6857 (HB6857), “An Act Concerning the Attorney General’s Recommendations Regarding Social Media and Minors,” takes aim at what Attorney General William Tong calls “the algorithms, the machine learning, which is designed to analyze what you’re looking at and then feed you more information.”
Should the law pass, it will ban data-based algorithmic recommendations for underage users without parental consent. It will also block kids from using social media between midnight and 6 a.m., and cap use time at one hour a day.
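For illustration, those restrictions reduce to a handful of policy checks. The sketch below assumes consent applies only to the recommendation ban, with the curfew and time cap applying to all minors; the bill’s actual consent semantics may differ.

```python
# A toy sketch of HB6857's three restrictions as policy checks. The consent
# semantics are an assumption based on the article's description.
from datetime import datetime, timedelta

CURFEW_HOURS = range(0, 6)               # midnight to 6 a.m.
DAILY_CAP = timedelta(hours=1)           # one hour of use per day

def personalized_feed_allowed(is_minor: bool, parental_consent: bool) -> bool:
    # Data-driven recommendations require an affirmative parental opt-in.
    return (not is_minor) or parental_consent

def session_allowed(now: datetime, used_today: timedelta, is_minor: bool) -> bool:
    if not is_minor:
        return True
    if now.hour in CURFEW_HOURS:         # blocked during the curfew window
        return False
    return used_today < DAILY_CAP        # enforce the daily time cap

# 1:30 a.m. falls inside the curfew window, so a minor is blocked.
print(session_allowed(datetime(2025, 2, 1, 1, 30),
                      timedelta(minutes=10), is_minor=True))              # False
print(personalized_feed_allowed(is_minor=True, parental_consent=False))  # False
```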
Tong says a lot of it is up to parents. “If an individual parent decides that they want their kid to have access to algorithms, that they can handle it, they can do that, but they have to affirmatively make that decision. It can’t just be some simple click through, it has to be some reasonable verification that the parent made that decision.”
He notes that while many platforms already have policies in place restricting use to those over 13, few enforce them. And while a stipulation in HB6857 says any age gate requirements the state imposes need to be “commercially feasible,” Tong – who is one of 42 attorneys general suing Meta for deliberately creating addictive products targeting children – dismisses that concern in the context of the wealth social platforms have generated.
“It’s up to these companies, which make trillions of dollars every year off of all of us, to figure out how to effectively age gate, verify the age of young people, and to verify parent consent. We know that just putting a page up that says, ‘Are you 18 or not?’ and clicking ‘Yes’ or ‘No’ doesn’t do it. It’s not enough.”
Nebraska bill classifies third-party age assurance as ‘reasonable’
As in Australia, social media is being pulled into an age verification debate that began with pornography, as more and more states table age check bills targeting social platforms.
Nebraska has introduced LB383, which proposes to create the Parental Rights in Social Media Act, requiring social media companies operating in Nebraska to utilize “reasonable age verification” methods to keep users under 18 off the sites. According to Unicameral Update, digital ID or third-party age verification services are considered “reasonable,” but any commercial entity or third party would be prohibited from retaining personally identifying information after providing age assurance.
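The no-retention requirement amounts to a data-minimization pattern: the verifier inspects the identity evidence, and the platform keeps only the resulting over/under-18 signal. A minimal sketch of that flow – the names, types and decision logic are invented, since LB383 prescribes no API – might look like this:

```python
# Illustrative only: names, types and flow are assumptions, not LB383's text.
from dataclasses import dataclass

@dataclass(frozen=True)
class AgeAssertion:
    over_18: bool          # the only datum the platform stores
    verifier_id: str       # which accredited service vouched for it

def third_party_verify(evidence: bytes) -> AgeAssertion:
    """Stand-in for an accredited verifier; a real one would check a
    digital ID or document and then discard it."""
    over_18 = bool(evidence)             # placeholder decision logic
    return AgeAssertion(over_18=over_18, verifier_id="example-verifier")

def onboard_user(evidence: bytes) -> AgeAssertion:
    assertion = third_party_verify(evidence)
    # The raw evidence goes no further: only the yes/no assertion is
    # returned and persisted, per the no-PII-retention requirement.
    return assertion

print(onboard_user(b"scanned-id-bytes"))
```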
Those under 18 can still create accounts – but only if their parents undergo age verification and submit a signed consent form.
Attorney General Mike Hilgers, who would oversee enforcement of the act, echoes his Connecticut counterpart in noting that “these are not on-accident algorithms that are just sort of inadvertently bringing in children. These are by design because some of the most lucrative customers you can find in this area are children.”
Utah bill would allow parents to sue app stores for age gate violations
Utah’s age assurance legislation is targeting app stores. The Utah News Dispatch reports on SB142, a bill that would create the App Store Accountability Act, linking a minor’s app store account to a parent’s.
The regulation would follow a cascade model, wherein users would be required to provide age assurance (for example, with a credit card) and underage users would be directed to tether their accounts to a guardian’s. Like Connecticut’s bill, it is designed to put decisions in parents’ hands, letting them determine what their kids are allowed to do online.
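Sketched under an assumed data model (SB142 does not specify one), the cascade looks something like this: an adult who passes an age check gets a full account, while a minor’s account is refused unless tethered to an already verified guardian.

```python
# Hypothetical data model for the SB142-style cascade described above.
from dataclasses import dataclass

@dataclass
class Account:
    user_id: str
    verified_adult: bool = False
    guardian_id: str | None = None       # set only on tethered minor accounts

def open_account(user_id: str, passed_age_check: bool,
                 guardian: Account | None = None) -> Account:
    if passed_age_check:                 # e.g. a credit-card-backed check
        return Account(user_id, verified_adult=True)
    if guardian is None or not guardian.verified_adult:
        raise PermissionError("minor accounts must be tethered to a verified guardian")
    return Account(user_id, guardian_id=guardian.user_id)

parent = open_account("pat", passed_age_check=True)
child = open_account("kim", passed_age_check=False, guardian=parent)
print(child)   # Account(user_id='kim', verified_adult=False, guardian_id='pat')
```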
Notably, a violation of the law would be classified as a “deceptive trade practice” under Utah law – opening the way for parents to bring civil action against app store providers.
Texas SCOPE Act blocked by federal judge
The wave of age assurance legislation may be rising, but legal barriers are being erected almost as quickly, as both industry associations and digital rights groups press the case that age assurance requirements violate the First Amendment.
Chron.com reports that a federal judge has temporarily blocked portions of Texas’ SCOPE (Securing Children Online through Parental Empowerment) Act, which went into effect in September 2024 after passing through the state legislature.
The judge ruled that parts of the act are likely unconstitutional, siding with a challenge brought by the Foundation for Individual Rights and Expression (FIRE).
“The court determined that Texas’s law was likely unconstitutional because its provisions restricted protected speech and were so vague that it made it hard to know what was prohibited,” says FIRE chief counsel Bob Corn-Revere. “States can’t block adults from engaging with legal speech in the name of protecting children, nor can they keep minors from ideas that the government deems unsuitable.”
Texas Attorney General Ken Paxton immediately appealed the decision.