Redefining online safety: The evolution of age verification laws

By: Dan Yerushalmi, CEO, AU10TIX

Today’s youth are more Internet-savvy than any previous generation, with sites like TikTok, Snapchat, Instagram and others specifically catering to them. However, the Internet is a vast and dangerous place, and ensuring the protection of minors remains a significant challenge. From cyberbullying and online predators to exposure to explicit content and the risk of developing addictive behaviors, children and teens encounter myriad dangers online. To combat this, many states are drafting new legislation that aims to protect children by requiring platforms to confirm users’ ages, allowing for enhanced safety and accountability online.

Age verification is recognized as a critical tool for ensuring minors’ safety: by confirming users’ ages, platforms can implement tailored safety measures and controls that mitigate potential risks and safeguard minors from harmful content and interactions.

Age verification laws also foster a culture of responsibility and accountability among both platform operators and users. They are essential for creating secure online environments for young users and promoting a safer digital experience for all.

Online threats

Online platforms have gained widespread popularity among people of all ages, but they also pose serious risks to their youngest users. Age verification has emerged as an essential safeguard for minors online and a way for platforms to align with industry standards.

Several states are working on legislation to prevent children from accessing social media platforms and adult content, but many other websites also present dangers that can be addressed with age verification. For example, ecommerce and online retail settings must comply with legal and ethical standards when selling age-restricted products such as alcohol, tobacco, and certain pharmaceuticals.

Similarly, in gaming and gambling, revenue streams are soaring due to increasing popularity, but the ease of access and the allure of these platforms to minors raise concerns about potential addiction and regulatory non-compliance. Dating and social networking apps also pose threats to children, as users engage in personal interactions and share sensitive information, making them vulnerable to exploitation, harassment, and exposure to inappropriate content.

KOSA regulation

The Children’s Online Privacy Protection Act (COPPA) was designed to protect underage individuals online, but it deals mostly with data privacy. Other legislation aims to prevent kids from accessing inappropriate online material. However, these laws lack effective enforcement mechanisms, allowing users to easily bypass age restrictions. Recognizing these shortcomings, the Kids Online Safety Act (KOSA) was introduced as a federal bill in July 2023 to establish legal standards for a wide range of online service providers.

KOSA extends its reach beyond social media, encompassing a diverse array of online services such as video games, messaging applications, video streaming services, and other platforms used or reasonably likely to be used by minors. Unlike previous regulations, KOSA calls for platforms to design products responsibly, shield children from harmful content, and participate in audits that assess their effectiveness. This shift signifies a departure from simple checkbox verifications and highlights the need for more comprehensive age verification measures.

Laws impacting age verification

To address growing concerns about minors’ safety online, several states in the United States, including Florida, California, Texas, Louisiana, Arkansas, and Utah, have passed laws that impose stricter regulations on underage users.

These state laws go beyond federal guidelines like COPPA and KOSA, demanding specific features from age verification tools and imposing fines for non-compliance. For example, the California Age-Appropriate Design Code Act, modeled after the UK’s Age-Appropriate Design Code, introduces significant changes to online privacy and content availability for minors in California. Scheduled to come into effect on July 1, 2024, the Act requires impacted businesses to estimate the age of minor users with a reasonable level of certainty and set default privacy settings to a high level for minors.
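
As an illustration of that requirement, the sketch below shows one way a platform might apply high-privacy defaults to accounts estimated to belong to minors. The types, the confidence threshold, and the specific settings are assumptions made for this example; they are not taken from the Act or from any particular platform.

```typescript
// Illustrative sketch only: high-privacy defaults for estimated minors.
// The threshold and settings below are assumptions, not legal guidance.

type AgeEstimate = {
  estimatedAge: number; // e.g. from a declared birthdate or an age-estimation service
  confidence: number;   // 0..1; what counts as a "reasonable level of certainty" is the platform's call
};

type PrivacySettings = {
  profileVisibility: "public" | "private";
  allowMessagesFromStrangers: boolean;
  personalizedAds: boolean;
  locationSharing: boolean;
};

const HIGH_PRIVACY_DEFAULTS: PrivacySettings = {
  profileVisibility: "private",
  allowMessagesFromStrangers: false,
  personalizedAds: false,
  locationSharing: false,
};

const STANDARD_DEFAULTS: PrivacySettings = {
  profileVisibility: "public",
  allowMessagesFromStrangers: true,
  personalizedAds: true,
  locationSharing: false,
};

// Treat the user as a minor unless the estimate confidently indicates an adult.
function defaultPrivacyFor(estimate: AgeEstimate): PrivacySettings {
  const confidentlyAdult = estimate.estimatedAge >= 18 && estimate.confidence >= 0.9;
  return confidentlyAdult ? STANDARD_DEFAULTS : HIGH_PRIVACY_DEFAULTS;
}

// Example: an account estimated at 15 years old receives the high-privacy defaults.
console.log(defaultPrivacyFor({ estimatedAge: 15, confidence: 0.95 }));
```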

Similarly, Texas HB18, also known as the SCOPE Act, imposes stringent obligations on online platforms, requiring age verification for users at sign-up and parental consent for various account activities. These state laws align with the broader intent of regulatory frameworks like COPPA and GDPR, underscoring the need for robust age verification processes that can protect minors from explicit content and harmful influences online.

The Protecting Kids on Social Media Act is another notable bill that has garnered attention for proposing to set the minimum age of social media users at 13, with parental consent required for teens aged 13 to 18. The bill also prohibits platforms from using algorithms to recommend content to young users.
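
To make those rules concrete, here is a tiny sketch of the access logic such a bill implies. Treating users 18 and older as adults, along with the function and type names, is an assumption made for illustration only.

```typescript
// Illustrative only: access rules as summarized above for the
// Protecting Kids on Social Media Act. The age-18 cutoff is an assumption.
type AccessDecision = "blocked" | "needs_parental_consent" | "allowed";

function accessDecision(age: number, hasParentalConsent: boolean): AccessDecision {
  if (age < 13) return "blocked";       // below the proposed minimum age
  if (age < 18) {                       // teens require parental consent
    return hasParentalConsent ? "allowed" : "needs_parental_consent";
  }
  return "allowed";                     // adults
}

// Examples
console.log(accessDecision(12, false)); // "blocked"
console.log(accessDecision(15, false)); // "needs_parental_consent"
console.log(accessDecision(15, true));  // "allowed"
```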

While the intention behind such legislation is to safeguard children from potential online harm, critics argue that outright bans on children’s social media access may not be the most effective solution. Proposed bans raise concerns for both children and businesses. For children, social media serves as a vital space for social interaction, learning, and self-expression. Prohibiting children from accessing social media platforms could isolate them from supportive communities and impede their digital literacy development.

For businesses, banning children from accessing these platforms would not only result in a loss of potential customers but also disrupt the ecosystem of content creation and engagement that drives these platforms’ success. Instead of outright bans, policymakers and industry stakeholders should focus on implementing robust age verification measures.

Challenges in implementing age verification laws

Despite the clear necessity for age verification laws, there are many hurdles to their implementation. One major obstacle is the contention surrounding privacy. Critics argue that robust verification measures may infringe upon users’ privacy rights by requiring them to disclose personal information. Balancing the need for stringent age verification with the protection of individual privacy presents a significant challenge for lawmakers.

Additionally, there are concerns about the feasibility and effectiveness of age verification systems, especially across international borders and on platforms with diverse user bases. The lack of standardized protocols adds complexity to the development of reliable and privacy-respecting age verification processes, making it challenging to pass comprehensive legislation. However, as the risks posed to minors online become increasingly apparent, there is growing pressure on policymakers to find viable solutions that address these concerns while prioritizing child safety.

Balancing speed and compliance

In order to retain their customers and remain profitable, online businesses cannot make their age verification processes overly onerous or time-consuming. Striking a balance between robust processes and a seamless user experience is crucial. Automation can help: automated identity verification technology streamlines and accelerates the age verification procedure, so companies can simultaneously protect their youngest customers, comply with regulatory requirements, and provide a great user experience.
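
As a rough sketch of what such an automated check might look like at sign-up, the example below calls out to an external identity verification service and routes unverified or underage users to a restricted experience. The endpoint, response shape, and field names are hypothetical placeholders and do not represent any real vendor’s API.

```typescript
// Hypothetical sketch of an automated age gate at sign-up.
// The service URL and response fields are placeholders, not a real API.

type VerificationResult = {
  verified: boolean;       // document/selfie check succeeded
  ageOver: number | null;  // e.g. 18 if the check confirms the user is at least 18
};

// Submit an ID document and selfie to a (hypothetical) verification endpoint.
async function verifyAge(documentImage: Blob, selfieImage: Blob): Promise<VerificationResult> {
  const form = new FormData();
  form.append("document", documentImage);
  form.append("selfie", selfieImage);

  const response = await fetch("https://idv.example.com/v1/age-check", {
    method: "POST",
    body: form,
  });
  if (!response.ok) {
    throw new Error(`Verification service error: ${response.status}`);
  }
  return (await response.json()) as VerificationResult;
}

// Sign-up gate: verified adults proceed directly; everyone else is routed to
// a restricted, minor-safe experience or a parental-consent flow.
async function gateSignup(documentImage: Blob, selfieImage: Blob): Promise<"adult" | "restricted"> {
  const result = await verifyAge(documentImage, selfieImage);
  const isAdult = result.verified && result.ageOver !== null && result.ageOver >= 18;
  return isAdult ? "adult" : "restricted";
}
```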

The Internet will likely continue to play a significant role in the lives of many minors, so the evolution of age verification legislation and technology remains crucial. By adhering to regulatory frameworks like KOSA and embracing global standards for age verification, online platforms, regulatory agencies, technology developers, and advocacy groups can collaboratively create safer digital environments for all users.

Ultimately, the continuous evolution of age verification reflects a collective commitment to prioritize online safety and protect the most vulnerable members of our digital society.

About the author

Dan Yerushalmi is CEO of AU10TIX.

DISCLAIMER: Biometric Update’s Industry Insights are submitted content. The views expressed in this post are that of the author, and don’t necessarily reflect the views of Biometric Update.
