EU Parliament supports age assurance to block kids under 16 from social media

The European Parliament is supporting an EU-wide digital minimum age of 16 for accessing social media, video-sharing platforms and AI companions.
On Wednesday, Members of the European Parliament (MEPs) voted to adopt a report which supports the European Commission in introducing age assurance systems and advocates a ban on the most harmful and addictive practices on online platforms.
Alongside the enforcement of the Digital Services Act (DSA), the measures are set to “dramatically raise the level of protection for children,” says Christel Schaldemose, the Danish social-democrat who led the report.
“We are finally drawing a line,” says Schaldemose. “We are saying clearly to platforms: your services are not designed for children. And the experiment ends here.”
The MEPs expressed support for age assurance to be carried out through the EU age verification app, currently under development by Scytales, and the European Digital Identity (EUDI) Wallet (although the Age Verification Providers Association (AVPA) has argued that the EUDI Wallet does not meet the EU’s double-blind privacy requirement for age assurance).
euCONSENT’s AgeAware token-based age assurance platform went live earlier this month. That project originally brought together European data protection authorities with age check providers Yoti, AgeChecked and VerifyMy.
The non-legislative report gathered widespread support, with 483 votes in favor, 92 against and 86 abstentions.
The report has previously been backed by major political parties. A significant share of the opposing votes and abstentions came from right-leaning members who support age verification in principle but believe such policies should be set by individual member states rather than at EU level, and who also raised concerns about potential surveillance and restrictions on free speech.
Privacy-first age assurance
During the vote, MEPs called for age assurance to be accurate and preserve minors’ privacy.
According to the DSA, online platforms accessible to minors are required to introduce measures ensuring minors have a high level of privacy, safety and security. The Commission guidelines recommend a risk-based approach: high-risk platforms, for instance, would have to adopt “accurate, robust and privacy-preserving” age verification mechanisms.
Senior managers could be made personally liable in cases of serious and persistent non-compliance, especially when it comes to age verification. In addition, introducing an age assurance system would not relieve a platform from the responsibility of ensuring its products are safe and age-appropriate.
The report notes that the EU currently has a fragmented approach to age assurance. Some member states have implemented advanced measures to protect minors, while others are lagging. An EU-level solution should be discussed, but consideration must be given to cultural norms, societal values and public sensitivities related to age assessment tools.
The Parliament also called for other measures aimed at protecting children online, including disabling addictive features, banning engagement-based recommendation systems and tackling targeted ads and influencer marketing. Urgent action is needed to address the rise of generative AI tools, including deepfakes, companionship chatbots, AI agents and AI-powered nudity apps, the MEPs say.
The report cites research on internet addiction among minors from the European Parliamentary Research Service (EPRS). MEPs also highlighted that more than 90 percent of Europeans say that putting in place age assurance measures to restrict age-inappropriate content is an urgent issue, according to the 2025 Eurobarometer.