
Australia sets out conditions for social media age restrictions

Platforms must take ‘reasonable steps’ to avoid fines
Australia is entering uncharted regulatory waters as it seeks to implement world-first social media age restrictions.

Age assurance systems, including biometric age estimation, facial age analysis, identity document verification and parental control measures, are an effective way to protect young people from age-inappropriate content online, according to the Australian government’s final report of the independent Age Assurance Technology Trial (AATT).

Communications Minister Anika Wells promised that the Albanese Government will push forward in its plans to introduce age restrictions for accessing social media and other platforms with the publication of the report.

The eSafety Commissioner has said the age restrictions are “likely” to apply to Facebook, Instagram, Snapchat, TikTok, X (formerly Twitter) and YouTube, among other platforms, and has given more specific guidance on when the restrictions will apply.

X Executive Chairman and CTO Elon Musk complained last year that age checks for social media are “a backdoor way to control access to the Internet for all Australians.” YouTube objected to its inclusion in the category, arguing that it is more like television than social media, and even drafting in the Wiggles to make the case. But the government believes it fits the criteria.

Generally, the conditions for age restriction will apply to social media platforms that meet three criteria. These are: the sole purpose, or a significant purpose, of the service is to enable online social interaction between two or more end-users; the service allows end-users to link to, or interact with, some or all of the other end-users; the service allows end-users to post material on the service.

Under legislative rules set out by the Minister for Communications in July, online gaming and standalone messaging apps are among the services excluded from these conditions.

However, messaging platforms that incorporate social media-like functions, such as enabling user interactions beyond simple messaging, could fall within the scope of the age restrictions. The same applies to messaging tools embedded within social media accounts that are themselves subject to age restrictions.

The Commissioner has stressed that it is not a social media ban but a delay to having accounts, with no penalties for under-16s who access an age-restricted social media platform, or for their parents or guardians. The social media platforms, however, may face penalties if they don’t take “reasonable steps” to prevent under-16s from having accounts. These include court-imposed fines of up to 150,000 penalty units for corporations, currently equivalent to AU$49.5 million (US$32.6 million).
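The headline figure follows directly from the penalty-unit arithmetic. A minimal sketch, assuming the Commonwealth penalty unit value of AU$330 (its value since November 2024; the unit is periodically indexed, so the dollar ceiling moves with it):

```python
# Sanity check of the maximum corporate fine quoted above.
# Assumption: Commonwealth penalty unit = AU$330 (indexed; may change).
PENALTY_UNIT_AUD = 330
MAX_PENALTY_UNITS = 150_000  # statutory ceiling for corporations

max_fine_aud = MAX_PENALTY_UNITS * PENALTY_UNIT_AUD
print(f"Maximum corporate fine: AU${max_fine_aud:,}")  # AU$49,500,000
```

Because the fine is expressed in penalty units rather than dollars, the maximum automatically rises whenever the unit value is indexed, without amending the Act itself.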

The eSafety Commissioner says the restrictions aim to protect young Australians from “pressures and risks” that users can be exposed to while logged in to social media accounts, claiming they come from design features that encourage spending more time on screens, while showing content that can harm their health and wellbeing.

Australia has set December 10 as the date when minimum age obligations commence, and has released the results of its public consultations. The consultation focused on the eSafety Commissioner’s implementation of the 2024 Online Safety Amendment (Social Media Minimum Age) Act, rather than on the contents of the legislation itself.

The summary document’s key themes center on the need for age assurance technology that prioritizes privacy and user consent. Systems should be robust, accurate and fair. Flexibility and scalability are noted as key advantages, as is a multi-layered approach integrated throughout the user journey. Respondents see potential in zero-knowledge proofs (ZKPs) and tokenization. Some experts and educators have reservations about biometrics, notably around privacy and the potential for bias. Privacy legislation and standards are seen as providing a baseline for age assurance practice.

The responses from eSafety’s National Online Safety Education Council and online safety educators show “strong support for shifting greater responsibility to platforms rather than parents and carers.”
