Australia expands age checks to AI chatbots, app stores, porn sites and more

After becoming the first country to ban under-16s from social media, Australia has gone further, implementing one of the world’s most comprehensive age verification regimes to protect underage users, covering AI chatbots, app stores, online gaming, search engines, messaging services and pornography sites. The rules are already drawing criticism from some firms: Aylo, the owner of explicit sites including Pornhub, has responded by blocking Australian users from its platforms entirely.
On Monday, the country implemented the Age-Restricted Material Codes, requiring designated platforms to introduce age verification measures, such as facial age estimation, digital wallets and photo IDs, for material including high-impact violence, pornography, self-harm content, and content promoting suicide or disordered eating.
“These industry-developed codes shift that responsibility back where it belongs – onto the companies designing these digital platforms and profiting from their users – and will give children back a little more of their childhoods,” says eSafety Commissioner Julie Inman Grant.
App stores will be required to prevent under-18s from purchasing or downloading R18+ apps, while online gaming platforms must implement age verification for games classified R18+ by the Australian Classification Board.
The codes will also extend to AI-powered chatbots and companions such as OpenAI’s ChatGPT, with any platform capable of generating sexually explicit content, high-impact violence, or self-harm material required to verify the age of users.
General messaging services are not required to conduct age checks, but adult messaging services specializing in explicit content must verify that users are 18 or over. Social media platforms that permit pornography or self-harm material must also confirm that users are adults.
Search engines will take a different approach: users not logged into an account will have pornographic and high-impact violent content blurred by default, a protection that will also apply to logged-in under-18s, while adults will see unblurred results unless they choose otherwise.
Penalties for non-compliance could amount to AU$49.5 million (US$34.7 million).
Aylo blocks Australian users, AI companies (mostly) complying with new regs
Following the introduction of the new age assurance regulations, adult content company Aylo, which operates popular porn sites Pornhub, RedTube, YouPorn and Tube8, began blocking Australian users.
eSafety has responded to the Aylo decision by highlighting that the porn company was actively involved in drafting the age check regulations.
“The codes, the majority of which come into force on 9 March 2026, were drafted by representatives of the technology industry, including porn providers such as Aylo,” says the regulatory agency. “Aylo has indicated it will only offer ‘safe for work’ content on its free services in the Australian market instead of implementing age-check requirements for age-restricted material on its free services. This is ultimately a business decision for them.”
Aylo has previously blocked users in more than 20 U.S. states, France and the UK. Its owner, Canada-based Ethical Capital Partners (ECP), has argued that laws such as the UK Online Safety Act are not working, pointing to falling traffic to sites such as Pornhub.
According to reports, the firm has sent letters to Apple, Google, and Microsoft, urging them to support device-based age verification in their app stores and operating systems.
Unlike Aylo, many large AI chatbot companies seem to be complying with Australia’s new age assurance rules.
A week before the regulation was introduced, popular services such as ChatGPT, Replika and Claude had begun rolling out age assurance systems or blanket filters, according to a survey from Reuters. One high-profile exception is Elon Musk’s Grok, currently under global scrutiny for generating nonconsensual deepfakes.
Of the 50 AI platforms covered by the survey, nine text-based AI products have rolled out or announced plans for an age assurance system, while another 11 have applied blanket content filters or plan to block all Australians from using their services.
eSafety has previously expressed concern over AI companies “leveraging emotional manipulation, anthropomorphism and other advanced techniques to entice, entrance and entrench young people into excessive chatbot usage.”
The agency is currently conducting an evaluation of how the Social Media Minimum Age (SMMA) Act is working in practice. According to initial results, the country’s social media age restriction removed access to about 4.7 million accounts in the first half of December.
The government’s headline project leading up to Australia’s “under-16 ban” was the Age Assurance Technology Trial, which evaluated a variety of age verification, age estimation and age inference providers to gauge the feasibility of the technology. KJR provided technical testing and oversaw tests conducted in conjunction with Australian schools. In a recent Biometric Update Podcast, KJR director Andrew Hammond discussed the evaluation, and where the larger age assurance project in Australia stands.