Calls for chatbot age assurance increase as allegations of self-harm, psychosis grow

A key question facing society is: at what point do we determine that something is a threat to our health and safety? The answer is rarely simple; from lead paint to cigarettes to climate change, hazards to our collective well-being have their advocates, many of whom have deep pockets with which to fund their efforts.
In the age of large language models, cases of AI chatbots being accused of counseling users to commit suicide are becoming common. Last week in California courts, four new suits were filed against OpenAI, the creator of ChatGPT, alleging wrongful death, assisted suicide, involuntary manslaughter and negligence. According to a report in the New York Times, the four suits are accompanied by three additional cases of people claiming ChatGPT contributed to their mental health breakdowns.
Many of the most publicized cases of ChatGPT-induced suicide involve teenagers or young people, but the deceased in the four new cases do not hew to a single demographic. Amaurie Lacey was a 17-year-old from Georgia. Florida man Joshua Enneking was 26; Texas man Zane Shamblin was 23. Oregon’s Joe Ceccanti, who used ChatGPT for years before it led him to a psychotic break, was 48 when he killed himself.
Allan Brooks is also 48; according to the Times, the Ontario man is suing OpenAI after he “came to believe that he had invented a mathematical formula with ChatGPT that could break the internet and power fantastical inventions.” He’s now on short-term disability leave.
Indeed, the Tech Justice Law Project, which filed the suits, says all seven were filed on one day to “show the variety of people who had troubling interactions with the chatbot.”
The phenomenon is not limited to the U.S. The BBC has the story of Viktoria, a young Ukrainian woman displaced by the war, whose conversations with ChatGPT led it to encourage her to commit suicide; she got help instead.
Safeguards for chatbots catching on around the world
OpenAI is not pretending this isn’t happening. In fact, it recently opted to release internal data showing that “0.07 percent of users might be experiencing ‘mental health emergencies related to psychosis or mania’ per week, and that 0.15 percent were discussing suicide.” Per the Times, scaled across OpenAI’s user base, those percentages are equivalent to half a million people showing signs of psychosis or mania, and more than a million potentially discussing suicidal intent.
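The scale behind those small percentages is easy to underestimate. A minimal sketch of the arithmetic, assuming for illustration a weekly active user base of roughly 800 million (a figure OpenAI has cited publicly; the Times does not specify the number it used):

```python
# Rough scale check: what 0.07% and 0.15% of weekly users mean in absolute terms.
# The ~800 million weekly-user figure is an assumption for illustration,
# not a number taken from the article.
WEEKLY_USERS = 800_000_000

psychosis_or_mania = round(WEEKLY_USERS * 0.0007)  # 0.07 percent per week
suicidal_ideation = round(WEEKLY_USERS * 0.0015)   # 0.15 percent per week

print(f"Possible psychosis/mania emergencies per week: {psychosis_or_mania:,}")
print(f"Possibly discussing suicide per week: {suicidal_ideation:,}")
```

Under that assumed base, the figures work out to roughly 560,000 and 1.2 million people per week, consistent with the Times’ “half a million” and “more than a million” characterizations.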
The company has introduced some safeguards and parental controls for the chatbot, and says it has updated its model to respond more appropriately to inquiries about suicide. A statement from the firm says ChatGPT is trained “to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support.” It claims it is working with mental health clinicians to refine its safety tools.
But the regulatory sword hovers overhead, as questions swirl over OpenAI’s prioritizing speed to market over safety. The lawsuit regarding Amaurie Lacey’s death says it was “neither an accident nor a coincidence but rather the foreseeable consequence of OpenAI and Samuel Altman’s intentional decision to curtail safety testing and rush ChatGPT onto the market.”
As age assurance measures for adult content and social media take effect around the world, regulators are already turning their attention to AI. Canada is considering putting digital age check requirements on large language model chatbots. In the U.S., Missouri Senator Josh Hawley has introduced S3062, “a bill to require artificial intelligence chatbots to implement age verification measures and make certain disclosures, and for other purposes.” And Australia’s eSafety Commissioner has registered six new codes under the Online Safety Act, partly aimed at deploying age assurance tech to restrict children’s access to chatbots.
Ofcom can already prosecute chatbots, says Baroness Merron
In the UK, politicians are asking the government what it plans to do about so-called AI psychosis, which encompasses signs of suicidal ideation. Debating in the House of Lords, Baroness Gillian Merron notes that “Ofcom has made it quite clear that if an AI service searches the live internet and returns results it will be regulated under the Act as a search service.” This implies that chatbots are already subject to scrutiny – and punitive measures – from the UK regulator under the Online Safety Act.
On LinkedIn, Christopher Holmes, Baron Holmes of Richmond, points to his AI Regulation Report, which comments on a proposed AI Regulation Bill. Holmes supports human-centric AI regulation that “can, and must, be pro-innovation, pro-investment, pro-consumer, pro-creative, pro-citizen rights.”
On the same platform, Scott Wallace, a PhD in Clinical Psychology, runs down some of the numbers OpenAI has reported since launching GPT-5, which integrated more safeguards against conversations turning deadly. The firm says it has seen a 65-80 percent reduction in unsafe responses across sensitive domains, 39 percent fewer unsafe responses involving psychosis and mania compared with GPT-4o, and a 52 percent reduction in unsafe handling of suicidal ideation.
“On the surface,” says Wallace, “these are genuine gains. They suggest the GPT-5 model is better at avoiding dangerous missteps, better at following behavioural guidelines, and more consistent under clinical review. But we need to be clear about what these numbers represent and what they don’t. These are gains against OpenAI’s own taxonomies and test protocols, not evidence that fewer people in the real world are being harmed, diverted from care, or supported toward recovery.”
Why are we doing this again?
A common theme in any political statement about LLMs and “AI” is that, despite the risks, the technology obviously has huge potential to change human life for the better. Even those encouraging regulation don’t want to stifle it too much. But there is increasing evidence that LLM chatbots, as currently designed, do much more damage than good. Beyond telling people to kill themselves, according to a recent MIT study, they may be eroding our critical thinking skills. And their operation and continued growth require the construction of massive data centers that consume outsized amounts of fresh water.
Moreover, despite the load they are currently carrying for the U.S. economy, it has yet to be proven that they’re even a good business proposition; as recently as last week, Sam Altman had to go on record to say the company is not seeking government subsidies to finance its construction projects. “OpenAI Races to Quell Concerns Over Its Finances,” says a recent headline in the New York Times.
All of which raises the primary question: Chat, is this even a good idea? As biometrics vendors know, the potential gold rush around algorithmic technology is not limited to LLMs and generative tech. Chatbots are one specific use of that technology – and one that continues to demonstrate the deep risks involved in using it. In North America, steel-tipped lawn darts were banned after three children died from playing with them. Based on known cases, ChatGPT’s tally is at least five – and counting.
Article Topics
age verification | Australia age verification | Canada | chatbots | ChatGPT | children | eSafety Commissioner | OpenAI | regulation | UK age verification | United States





