TikTok rolls out Yoti FAE across Europe as social media debate rages

TikTok is introducing age assurance across Europe in response to regulatory pressure. The ByteDance-owned platform’s age assurance methods include facial age estimation from Yoti, along with credit card data and scans of government-issued ID.

By policy, TikTok does not allow people under the age of 13 to create accounts. New registrants are asked their birth date, and the company subsequently analyzes “signals” to detect indications that an account may belong to someone who does not meet the age requirement, according to the company announcement. TikTok also says its content moderation teams are trained to recognize signs that a user is under 13, and anyone, with or without a TikTok account, can report a suspected underage account.

These methods already result in the removal of about 6 million accounts per month, the company says. If a user wants to appeal a judgment about their age, they must use one of the age assurance methods above.

The rollout follows a pilot of age checks which Reuters reports was carried out in the UK and resulted in the removal of thousands of accounts.

TikTok’s new rules apply across the European Economic Area, plus Switzerland and the UK.

Yoti also provides FAE for Meta’s Instagram and Facebook, among other social media platforms.

Social media age check requirements going viral

Pressure from regulators continues to grow, and Privately SA counts more than 40 countries now restricting, or considering restrictions on, social media access based on age. CNBC notes that the UK House of Lords is expected to vote this week on an amendment to the Children’s Wellbeing and Schools Bill which would bring in age checks for social media.

Privately commissioned a survey of consumers to gauge their reaction to this wave of regulation, and found that only 13 percent of adults trust online platforms to protect facial images or other forms of biometric data. Asked whether they would accept facial age estimation carried out entirely on-device, three times as many people (39 percent) expressed support.

“The debate has moved from ‘should platforms verify age?’ to ‘how do they do it?’ and we’re seeing a rapid shift toward enforceable age controls that provide data privacy guarantees,” says Deepak Tewari, CEO of Privately SA in an announcement revealing the survey results. “With so many countries now actively regulating or reviewing children’s access to social media, it makes reliable age assurance unavoidable. Facial Age Estimation technology (FAE) allows platforms to meet these requirements without asking users to share IDs, which is critical for both privacy and scale adoption.”

Privately completed 5 million on-device age checks in 2025, and serves three of the ten largest social media platforms operating in Australia, according to the company announcement.

A dangerous virus (the bans, that is)

“Social media bans are dangerous,” CEPA (the U.S.-based Center for European Policy Analysis) says in an article arguing that such age restrictions are a mistake.

In support of this assertion, it offers several question-begging premises: “If platforms must prove age and identity, it endangers everybody’s privacy. An account ban is easy to announce and hard to enforce. Platforms have to decide what counts as ‘reasonable’ proof of age, how often to re-check, and how to investigate without locking out legitimate users or collecting sensitive data,” CEPA Tech Policy Program Senior Researcher Dr. Anda Bologa writes. No figures or examples are offered. Neither the technology on offer, nor its use dating back several years, is mentioned.

“Teenagers can migrate to smaller apps, borrow credentials, or stay logged out, shifting the risk rather than reducing it,” Bologa writes.

Age verification at scale, Bologa adds, is “intrusive, error-prone, and expensive.”

The article asserts that age assurance compliance requirements give larger companies an advantage over smaller ones, and that teenagers will inevitably flock to smaller, less regulated corners of the internet. Relying on the compliance reports and risk assessments of the EU’s Digital Services Act, “without building a dangerous permanent verification regime,” is preferable, Bologa argues.
