‘Chat, am I old enough to flirt with you?’ OpenAI introduces age checks for GPT

It is hard to imagine worse publicity for a tech firm than “our product convinced a teen to kill themselves.” Yet that’s the mess OpenAI finds itself in, after its flagship chatbot, ChatGPT, was implicated in the death of U.S. teenager Adam Raine in April. The family has since sued, alleging that ChatGPT mentioned suicide 1,275 times in its interactions with Raine and repeatedly provided advice on specific suicide methods.
Now, the company is attempting to formally address the issue – by introducing age assurance.
Silicon Valley loves its principles, and a statement credited to OpenAI and World CEO Sam Altman says “some of our principles are in conflict, and we’d like to explain the decisions we are making around a case of tensions between teen safety, freedom, and privacy.”
The company pledges to take privacy seriously and treat its users “like adults.” For example, it says, “the default behavior of our model will not lead to much flirtatious talk, but if an adult user asks for it, they should get it.”
‘Chat, what’s the death cap mushroom?’ New fixes limit bad advice
Boldface is reserved for the third principle, which is about “protecting teens.”
“We prioritize safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection,” Altman says. That protection is apparently to take the form of an “age-prediction system” that will “estimate age based on how people use ChatGPT” and default to under-18 settings “if there is doubt.”
Technically, this would be classified as an age inference method, which makes assumptions based on behavior and patterns. In this, it is similar to YouTube’s recently announced ambient age check system.
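In rough terms, age inference of this kind amounts to scoring behavioral signals and falling back to the restrictive setting when confidence is low. The sketch below is purely illustrative: the signal names, weights, and threshold are invented, and nothing here reflects OpenAI’s actual model; it only demonstrates the “default to under-18 if there is doubt” decision rule described in the announcement.

```python
# Hypothetical sketch of an age-inference decision rule -- NOT OpenAI's
# actual system. All feature names and weights are invented for illustration.

def infer_age_bracket(signals: dict, threshold: float = 0.85) -> str:
    """Return 'adult' only when the adult-likelihood score clears a
    confidence threshold; otherwise default to 'under_18'."""
    # Toy scoring: each behavioral signal nudges the adult likelihood.
    score = 0.5
    if signals.get("self_reported_age", 0) >= 18:
        score += 0.3
    if signals.get("uses_workplace_topics"):
        score += 0.1
    if signals.get("school_homework_phrasing"):
        score -= 0.3
    if signals.get("active_late_school_nights"):
        score -= 0.1
    score = max(0.0, min(1.0, score))
    # The key design choice: doubt resolves to the restrictive bracket.
    return "adult" if score >= threshold else "under_18"

# A self-reported age alone is not enough to clear the threshold,
# so an ambiguous profile falls back to under-18 settings.
print(infer_age_bracket({"self_reported_age": 18}))           # -> under_18
print(infer_age_bracket({"self_reported_age": 18,
                         "uses_workplace_topics": True}))     # -> adult
```

The high threshold encodes the stated policy asymmetry: misclassifying an adult as a teen costs some “freedom,” while the reverse costs safety, so the rule errs toward under-18.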
Altman says that “in some cases or countries we may also ask for an ID; we know this is a privacy compromise for adults but believe it is a worthy tradeoff.”
For those identified as teens, there will be tighter restrictions on “flirtatious talk” or “discussions about suicide or self-harm even in a creative writing setting.”
“If an under-18 user is having suicidal ideation, we will attempt to contact the users’ parents and if unable, will contact the authorities in case of imminent harm.”
Altman’s statement closes with a justification: “we realize that these principles are in conflict and not everyone will agree with how we are resolving that conflict. These are difficult decisions, but after talking with experts, this is what we think is best and want to be transparent in our intentions.”
‘Chat, are you the Reaper?’ More cases say AI can be deadly
Given Altman’s history with regulation, one might be tempted to translate that as, “a tragic event and ensuing lawsuit have forced us to introduce measures we’d prefer not to introduce.” But there is also a hint of panic in the move; according to a report from 404 Media, the chatbot has also been implicated in the murder-suicide of a 56-year-old man, and is facing a new lawsuit related to the suicide of a 13-year-old girl.
The piece notes that ChatGPT used to be much more limited in how it was allowed to interact with users. “Competition from other models, especially locally hosted and so-called ‘uncensored’ models, and a political shift to the right which sees many forms of content moderation as censorship, has caused OpenAI to loosen those restrictions.”
This week, Adam Raine’s parents testified before the U.S. Congress, explaining how trusting ChatGPT proved fatal for their son. CBS News quotes his father, Matthew: “What began as a homework helper gradually turned itself into a confidant and then a suicide coach.”
The report also quotes Josh Golin, executive director of Fairplay, a group advocating for children’s online safety, who believes OpenAI’s announcement was timed to coincide with the Raines’ testimony. “This is a fairly common tactic – it’s one that Meta uses all the time – which is to make a big, splashy announcement right on the eve of a hearing which promises to be damaging to the company.”
“What they should be doing is not targeting ChatGPT to minors until they can prove that it’s safe for them. We shouldn’t allow companies, just because they have tremendous resources, to perform uncontrolled experiments on kids when the implications for their development can be so vast and far-reaching.”