TikTok rolls out Yoti FAE across Europe as social media debate rages

TikTok is introducing age assurance across Europe in response to regulatory pressure. The ByteDance-owned platform’s age assurance methods include facial age estimation from Yoti, along with credit card data and scans of government-issued ID.

By policy, TikTok does not allow people under the age of 13 to create accounts. New registrants are asked their birth date, and the company subsequently analyzes “signals” to detect indications an account may belong to someone who does not meet the age requirement, according to the company announcement. TikTok also says its content moderation teams are trained on how to recognize signs that a user is under 13, and other people, with or without a TikTok account, can report suspected underage accounts.

These methods already result in the removal of about 6 million accounts per month, the company says. If a user wants to appeal a judgement about their age, they must use one of the age assurance methods above.

The rollout follows a pilot of age checks which Reuters reports was carried out in the UK and resulted in the removal of thousands of accounts.

TikTok’s new rules apply across the European Economic Area, plus Switzerland and the UK.

Yoti also provides FAE for Meta’s Instagram and Facebook, among other social media platforms.

Social media age check requirements going viral

Pressure from regulators continues to grow, and Privately SA counts more than 40 countries now restricting or considering restrictions for social media based on age. CNBC notes that the UK House of Lords is expected to vote this week on an amendment to the Children’s Wellbeing and Schools Bill which would bring in age checks for social media.

Privately, the on-device facial age estimation provider, commissioned a survey of consumers to gauge their reaction to this wave of regulation, and found that only 13 percent of adults trust online platforms to protect facial images or other biometric data. When asked if they would accept facial age estimation carried out entirely on-device, three times as many people (39 percent) expressed support.

“The debate has moved from ‘should platforms verify age?’ to ‘how do they do it?’ and we’re seeing a rapid shift toward enforceable age controls that provide data privacy guarantees,” says Deepak Tewari, CEO of Privately SA in an announcement revealing the survey results. “With so many countries now actively regulating or reviewing children’s access to social media, it makes reliable age assurance unavoidable. Facial Age Estimation technology (FAE) allows platforms to meet these requirements without asking users to share IDs, which is critical for both privacy and scale adoption.”

Privately completed 5 million on-device age checks in 2025, and serves three of the ten largest social media platforms operating in Australia, according to the company announcement.

A dangerous virus (the bans, that is)

“Social media bans are dangerous,” CEPA (the U.S.-based Center for European Policy Analysis) says in an article arguing that such age restrictions are a mistake.

In support of this assertion, it offers several question-begging premises: “If platforms must prove age and identity, it endangers everybody’s privacy. An account ban is easy to announce and hard to enforce. Platforms have to decide what counts as ‘reasonable’ proof of age, how often to re-check, and how to investigate without locking out legitimate users or collecting sensitive data,” CEPA Tech Policy Program Senior Researcher Dr. Anda Bologa writes. No figures or examples are offered, and neither the technology already on offer nor its use over the past several years is mentioned.

“Teenagers can migrate to smaller apps, borrow credentials, or stay logged out, shifting the risk rather than reducing it,” Bologa writes.

Age verification at scale, she writes, is “intrusive, error-prone, and expensive.”

The article asserts that age assurance compliance requirements give larger companies an advantage over smaller ones and that teenagers will inevitably flock to smaller, less regulated corners of the internet. Relying instead on the compliance reports and risk assessments of the EU’s Digital Services Act, “without building a dangerous permanent verification regime,” is preferable, Bologa argues.
