
Social media’s age verification crisis: Can platforms solve the technical and ethical puzzle?

The demand for robust, ethical, and privacy-first verification systems is growing rapidly, explains Liudas Kanapienis, CEO and co-founder of Ondato. Tools that meet these standards already exist, and companies are actively working with platforms to implement them responsibly.

Restricting children’s access to social media is a controversial topic that has captured the attention of governments around the world. Age verification on tech platforms has suddenly become a pressing issue, demanding both precision and rapid implementation. Yet this is not a problem that appeared overnight; it is an ethical, global, and urgent one.

With Australia’s landmark under-16 social media ban taking effect in December 2025, and countries such as France and Spain proposing similar laws, technology firms are racing to develop tools that can verify a user’s age without compromising privacy or inadvertently locking out vulnerable groups. Fortunately, this is not uncharted territory.

A global regulatory push

Australia has passed legislation that will make it illegal for children under 16 to hold accounts on platforms such as Instagram, TikTok, Snapchat, Facebook, X (formerly Twitter), Reddit, and potentially YouTube, which is currently under review for an exemption.

The act places the onus on the platforms rather than the users, requiring them to take “reasonable steps” to verify users’ ages or face fines of up to AU$49.5 million per violation.

Nevertheless, with only five months until enforcement, the industry has not yet agreed on what “reasonable steps” actually means.

The technical puzzle: Scale, speed, and accuracy

The majority of existing age gates on social media rely on self-reported birthdays, which are easily circumvented and widely regarded as ineffective. Experiments with more sophisticated tools, such as AI-based facial age estimation, are underway.

Nevertheless, such solutions are not yet perfect. ABC reported that in recent tests, the technology could estimate a person’s age to within 18 months in only 85% of cases. The tool misidentified children as young as 15 as being in their 20s and 30s, which means a 14-year-old may be allowed to open a social media account while a 17-year-old is denied. Some studies show that even the most accurate age-estimation software on the market today still has an average error of 1.0 years; other software misjudges a person’s age by an average of 3.1 years.
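To make those accuracy figures concrete, here is a minimal sketch of how the two metrics quoted above are typically computed: mean absolute error, and the share of estimates falling within 18 months of the true age. The sample data below is invented purely for illustration.

```python
# Invented sample data: ground-truth ages vs. a model's estimates (years).
true_ages      = [14, 15, 16, 17, 21, 30]
estimated_ages = [15.2, 22.0, 16.4, 15.1, 20.3, 28.9]

errors = [abs(est - true) for est, true in zip(estimated_ages, true_ages)]

# Mean absolute error: the "average error of 1.0 years" figure is this metric.
mae = sum(errors) / len(errors)

# Share of estimates within 18 months (1.5 years) of the true age:
# the "within 18 months in 85% of cases" figure is this metric.
within_18_months = sum(e <= 1.5 for e in errors) / len(errors)

print(f"MAE: {mae:.2f} years; within 18 months: {within_18_months:.0%}")
```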

Delivering accurate, compliant age verification to hundreds of millions of users, with real-time checks and low friction, is a major technical challenge, particularly across devices, regions, and demographic groups. It is, however, achievable, and necessary to meet the goals of these new laws.

Solutions must also comply with a patchwork of international regulations: Australia is insisting on a 16+ limit, France will set it at 15, and some jurisdictions will continue to permit access at age 13. This adds complexity for platforms that need to roll out a single, scalable verification system.
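One way to picture that complexity: the minimum age becomes per-jurisdiction configuration rather than a single constant. A minimal sketch in Python; the country codes, thresholds, and 13-year default simply restate the figures mentioned above and are illustrative, not legal guidance.

```python
# Illustrative only: minimum-age thresholds keyed by jurisdiction.
# A real deployment would source these from a maintained compliance
# dataset rather than a hard-coded dict.
MIN_AGE_BY_JURISDICTION = {
    "AU": 16,  # Australia's under-16 ban
    "FR": 15,  # France's proposed limit
}
DEFAULT_MIN_AGE = 13  # jurisdictions that continue to permit access at 13

def minimum_age(jurisdiction: str) -> int:
    """Return the minimum account age for a user's jurisdiction."""
    return MIN_AGE_BY_JURISDICTION.get(jurisdiction, DEFAULT_MIN_AGE)

print(minimum_age("AU"))  # 16
print(minimum_age("US"))  # 13 (falls back to the default)
```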

The ethical dilemma: Privacy vs. protection

Beyond the technical problems, there is a more fundamental ethical conflict: how to ensure child safety without compromising the personal data of users.

Many age verification technologies use sensitive data, such as facial scans, government-issued IDs, or biometrics, which presents major privacy concerns, especially within the context of laws like the General Data Protection Regulation (GDPR) in the EU.

Protective measures, if not designed with privacy in mind, can have unintended consequences: Over-collection of data often leads to mistrust and regulatory backlash.

Privacy-first solutions strike a balance between safety, compliance, and user experience. This includes using a layered or hybrid approach, where low-friction, non-invasive tools like AI-based age estimation are prioritized, and more sensitive methods (such as ID scans) are reserved only for cases that truly require them. This mirrors best practices in digital identity verification across banking and fintech, proving that effective age assurance doesn’t have to come at the cost of personal privacy.
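As a rough illustration of that layered approach, here is a hypothetical flow in Python. The `estimate_age` and `verify_id_document` functions are placeholder stubs, not a real vendor API, and the two-year buffer is an assumption reflecting the error margins discussed earlier.

```python
from dataclasses import dataclass

# Placeholder stubs: a real deployment would call a vendor's facial
# age-estimation and ID-document verification services here.
@dataclass
class IdCheckResult:
    age: int

def estimate_age(selfie) -> float:
    return 17.3  # stub: pretend the model estimated 17.3 years

def verify_id_document(document) -> IdCheckResult:
    return IdCheckResult(age=17)  # stub: pretend the ID shows age 17

ESTIMATION_BUFFER_YEARS = 2.0  # assumed buffer for model error

def check_age(selfie, id_document, min_age: int) -> bool:
    """Layered flow: low-friction AI estimate first, ID check only if borderline."""
    estimated = estimate_age(selfie)
    if estimated >= min_age + ESTIMATION_BUFFER_YEARS:
        return True   # clearly over the limit; no ID needed
    if estimated <= min_age - ESTIMATION_BUFFER_YEARS:
        return False  # clearly under the limit
    # Borderline estimate: escalate to the more sensitive ID-document check.
    return verify_id_document(id_document).age >= min_age

print(check_age(selfie=None, id_document=None, min_age=16))  # True
```

The point of this design is that the invasive check runs only for the borderline band, keeping friction and data collection low for the majority of users.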

Unintended consequences and workarounds

There is also growing concern that aggressive enforcement may simply drive young users toward riskier behavior. VPNs, fake ID generators, and third-party apps that spoof user data are already prevalent, and they often contain malware, trackers, or phishing attempts.

Teens who attempt to circumvent restrictions may expose themselves to identity theft, spyware, or worse. This is not merely a policy failure; it becomes a cybersecurity problem.

A recent Australian study found that 37% of children aged 10 to 15 reported having seen harmful or inappropriate content on sites such as YouTube. However, the same study demonstrated how significant these platforms are for educational content and cultural connection, especially among regional and economically disadvantaged groups. This is further evidence that the issue is too complex to be handled rigidly. Without proper investigation and well-rounded legislation that weighs both the benefits and the drawbacks of social media, we risk harming the very children we are trying to protect.

A narrow window for innovation

With regulators breathing down their necks, social media platforms must accelerate not only technical innovation but also transparency with the general public.

Unless users, parents, and regulators understand how age verification works and believe it is non-invasive and fair, the technology will not work, no matter how sophisticated it is. People will not trust it.

Social media companies should take their lead from fintech firms, which have invested in user education, open data practices, and cross-sector engagement with governments and watchdogs.

What comes next?

In the coming months, the Australian government is expected to provide further guidance on acceptable verification standards, including whether YouTube will be exempt from the under-16 ban. In the meantime, other governments around the world are considering similar approaches.

The UK’s Online Safety Act, enacted in 2023, already requires age-appropriate design as a baseline for digital services. The EU plans to take children’s safety more seriously under the Digital Services Act. At the same time, several US states have introduced bills requiring parental consent or age verification for minors on social media.

However, until a common framework or more advanced technology is developed, compliance will remain fragmented and uneven.

We are at a crossroads. Done right, age verification can make the online world a safer place without breaking trust. And that’s exactly what responsible companies are focused on: developing privacy-first, compliant, and user-friendly solutions. Done wrong, it risks pushing users underground, undermining privacy, and creating only the illusion of safety.

About the author

Liudas Kanapienis is CEO and co-founder of Ondato.
