
‘Chat, am I old enough to flirt with you?’ OpenAI introduces age checks for GPT

Announcement coincides with testimony from parents who say AI hurt their kids

It is hard to imagine worse publicity for a tech firm than “our product convinced a teen to kill themselves.” Yet that’s the mess OpenAI finds itself in, after its beloved flagship chatbot, ChatGPT, was implicated in the death of U.S. teenager Adam Raine in April. The family has since sued, alleging that ChatGPT mentioned suicide 1,275 times in its interactions with Raine and repeatedly provided advice on specific methods.

Now, the company is attempting to formally address the issue – by introducing age assurance.

Silicon Valley loves its principles, and a statement credited to OpenAI and World CEO Sam Altman says “some of our principles are in conflict, and we’d like to explain the decisions we are making around a case of tensions between teen safety, freedom, and privacy.”

The company pledges to take privacy seriously and treat its users “like adults.” For example, it says, “the default behavior of our model will not lead to much flirtatious talk, but if an adult user asks for it, they should get it.”

‘Chat, what’s the death cap mushroom?’ New fixes limit bad advice

Boldface is reserved for the third principle, which is about “protecting teens.”

“We prioritize safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection,” Altman says. That protection is apparently to take the form of an “age-prediction system” that will “estimate age based on how people use ChatGPT” and default to under-18 settings “if there is doubt.”

Technically, this would be classified as an age inference method, which makes assumptions based on behavior and patterns. In this, it is similar to YouTube’s recently announced ambient age check system.
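To make the concept concrete: age inference systems of this kind typically score behavioral signals and fall back to the restrictive setting when confidence is low. The sketch below is purely illustrative; the signals, weights, and threshold are invented for the example and do not describe OpenAI's actual system.

```python
# Toy sketch of behavioral age inference with a "default to minor" rule.
# All signals and weights here are hypothetical, not OpenAI's.
from dataclasses import dataclass

@dataclass
class UsageSignals:
    account_age_days: int      # hypothetical signal: how old the account is
    school_topic_ratio: float  # hypothetical: share of homework-style prompts (0-1)
    late_night_ratio: float    # hypothetical: share of sessions after midnight (0-1)

def infer_is_adult(s: UsageSignals, confidence_floor: float = 0.7) -> bool:
    """Treat the user as an adult only if the score clears the floor;
    any doubt defaults to under-18 settings, as Altman describes."""
    score = 0.5
    score += 0.2 if s.account_age_days > 365 else -0.1
    score -= 0.3 * s.school_topic_ratio
    score += 0.1 * (1 - s.late_night_ratio)
    return score >= confidence_floor

# A borderline, teen-looking profile scores 0.22 and gets minor-mode defaults:
teen_like = UsageSignals(account_age_days=90,
                         school_topic_ratio=0.8,
                         late_night_ratio=0.4)
print(infer_is_adult(teen_like))  # False -> under-18 restrictions apply
```

The key design point, mirrored in OpenAI's stated policy, is the asymmetric default: a low-confidence score is treated as a minor, and the burden of proof (e.g. an ID check) falls on the user claiming to be an adult.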

Altman says that “in some cases or countries we may also ask for an ID; we know this is a privacy compromise for adults but believe it is a worthy tradeoff.”

For those identified as teens, there will be tighter restrictions on “flirtatious talk” or “discussions about suicide or self-harm even in a creative writing setting.”

“If an under-18 user is having suicidal ideation, we will attempt to contact the user’s parents and, if unable, will contact the authorities in case of imminent harm.”

Altman’s statement closes with a justification: “we realize that these principles are in conflict and not everyone will agree with how we are resolving that conflict. These are difficult decisions, but after talking with experts, this is what we think is best and want to be transparent in our intentions.”

‘Chat, are you the Reaper?’ More cases say AI can be deadly

Given Altman’s history with regulation, one might be tempted to translate that as, “a tragic event and ensuing lawsuit have forced us to introduce measures we’d prefer not to introduce.” But there is also a hint of panic in the move; according to a report from 404 Media, the chatbot has also been implicated in the murder-suicide of a 56-year-old man, and is facing a new lawsuit related to the suicide of a 13-year-old girl.

The piece notes that ChatGPT used to be much more limited in how it was allowed to interact with users. “Competition from other models, especially locally hosted and so-called ‘uncensored’ models, and a political shift to the right which sees many forms of content moderation as censorship, has caused OpenAI to loosen those restrictions.”

This week, Adam Raine’s parents testified before the U.S. Congress, explaining how trusting ChatGPT proved fatal for their son. CBS News quotes his father, Matthew: “What began as a homework helper gradually turned itself into a confidant and then a suicide coach.”

The report also quotes Josh Golin, executive director of Fairplay, a group advocating for children’s online safety, who believes OpenAI’s announcement was timed to coincide with the Raines’ testimony. “This is a fairly common tactic – it’s one that Meta uses all the time – which is to make a big, splashy announcement right on the eve of a hearing which promises to be damaging to the company.”

“What they should be doing is not targeting ChatGPT to minors until they can prove that it’s safe for them. We shouldn’t allow companies, just because they have tremendous resources, to perform uncontrolled experiments on kids when the implications for their development can be so vast and far-reaching.”
