Australia sets out conditions for social media age restrictions

Platforms must take ‘reasonable steps’ to avoid fines

Australia is entering uncharted regulatory waters as it seeks to implement world-first social media age restrictions.

Age assurance systems, including biometric age estimation, facial age analysis, identity document verification and parental control measures, are an effective way to protect young people from age-inappropriate content online, according to the Australian government’s final report of the independent Age Assurance Technology Trial (AATT).

With the publication of the report, Communications Minister Anika Wells promised that the Albanese Government will push forward with its plans to introduce age restrictions for access to social media and other platforms.

The eSafety Commissioner has said the age restrictions are “likely” to apply to Facebook, Instagram, Snapchat, TikTok, X (formerly Twitter) and YouTube, among other platforms. It has also been more specific about when the restrictions will apply.

X Executive Chairman and CTO Elon Musk complained last year that age checks for social media are “a backdoor way to control access to the Internet for all Australians.” YouTube objected to its inclusion in the category, arguing that it is more like television than social media, and even drafting in the Wiggles to make the case. But the government believes it fits the criteria.

Generally, the conditions for age restriction will apply to social media platforms that meet three criteria:

- the sole purpose, or a significant purpose, of the service is to enable online social interaction between two or more end-users;
- the service allows end-users to link to, or interact with, some or all of the other end-users;
- the service allows end-users to post material on the service.

Under legislative rules set out by the Minister for Communications in July, online gaming and standalone messaging apps are among the services excluded from these conditions.

However, messaging platforms that incorporate social media-style functions, such as user interactions beyond simple messaging, could fall within the scope of the age restrictions. The same goes for messaging tools embedded within social media platforms that are themselves age-restricted.

The Commissioner has stressed that this is not a social media ban but a delay to holding accounts: under-16s who access an age-restricted social media platform face no penalties, and neither do their parents or guardians. The platforms themselves, however, may face penalties if they don’t take “reasonable steps” to prevent under-16s from having accounts, including court-imposed fines of up to 150,000 penalty units for corporations, currently equivalent to 49.5 million Australian dollars (US$32.6 million).
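For readers checking the figures, the maximum fine follows directly from the penalty-unit arithmetic. A minimal sketch, assuming a Commonwealth penalty unit value of A$330, which is what the article’s own numbers imply:

```python
# Sanity check of the quoted maximum fine. The A$330 per-unit value is an
# assumption implied by the article's figures (150,000 units = A$49.5M).
PENALTY_UNIT_AUD = 330
MAX_UNITS_CORPORATION = 150_000

max_fine = PENALTY_UNIT_AUD * MAX_UNITS_CORPORATION
print(f"A${max_fine:,}")  # A$49,500,000 -- matches the quoted A$49.5 million
```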

The eSafety Commissioner says the restrictions aim to protect young Australians from “pressures and risks” they can be exposed to while logged in to social media accounts, attributing these to design features that encourage more time on screens and surface content that can harm their health and wellbeing.

Australia has set December 10 as the date the minimum age obligations commence, and the results of public consultations have now been released. The consultation focused on the eSafety Commissioner’s implementation of the Online Safety Amendment (Social Media Minimum Age) Act 2024, rather than on the contents of the legislation itself.

The summary document’s key themes touch on the need for age assurance technology that prioritizes privacy and user consent. Systems should be robust, accurate and fair. Flexibility and scalability are noted as key advantages, as is a multi-layered approach integrated throughout the user journey. Respondents see potential in zero-knowledge proofs (ZKPs) and tokenization. Some experts and educators have reservations about biometrics, notably around privacy and the potential for bias. Privacy legislation and standards are seen as providing a baseline for age assurance practice.
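To make the tokenization idea concrete, here is a minimal, hypothetical Python sketch: an age assurance provider checks a user’s age once and issues a signed token carrying only an “over 16” claim, so the platform can verify eligibility without ever seeing a birthdate. All names and the shared-key scheme are illustrative assumptions; a real deployment would use asymmetric signatures or a genuine zero-knowledge proof rather than a trusted shared secret.

```python
import hmac, hashlib, json, time

# Illustrative only: in practice the issuer and platform would not share a
# symmetric key; the issuer would sign with a private key, or the user would
# present an actual zero-knowledge proof.
ISSUER_KEY = b"shared-secret-between-issuer-and-platform"

def issue_token(over_16: bool) -> str:
    # The issuer has already verified the user's age out of band; the token
    # carries only the boolean claim, never the birthdate itself.
    claim = json.dumps({"over_16": over_16, "iat": int(time.time())})
    sig = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return f"{claim}|{sig}"

def verify_token(token: str) -> bool:
    claim, sig = token.rsplit("|", 1)
    expected = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # forged or tampered token
    return json.loads(claim)["over_16"]

token = issue_token(over_16=True)
print(verify_token(token))  # True -- the platform learns an age bracket, not a DOB
```

The point of this design is data minimization: the only attribute that crosses the trust boundary between the age assurance provider and the platform is the boolean age claim.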

The responses from eSafety’s National Online Safety Education Council and online safety educators show “strong support for shifting greater responsibility to platforms rather than parents and carers.”
