
For ChatGPT, OpenAI rolls out age inference system similar to YouTube’s

‘Age prediction’ model further complicates evolving language of age assurance
Categories Age Assurance  |  Biometrics News

One of the more unheralded battles being decided in the development of the age assurance industry is how, exactly, to talk about it. What began as “age verification” has grown into the broader blanket term, age assurance, which also includes age estimation methods based on both biometrics and email, and age inference technology that analyzes behavioral and usage patterns to infer a user’s age.

One of the purposes of the international standard for age assurance, ISO/IEC 27566-1, is to standardize the lexicon with clear definitions. Australia’s Age Assurance Technology Trial also contributed to the effort. But for the age assurance sector, which is currently booming and changing at pace to meet new demands, there is ongoing tension between the need to stabilize the language of age assurance, and the desire to reframe, reinvent and promote evolving age assurance technologies.

The latest example comes from OpenAI, which has announced that it is rolling out “age prediction” on ChatGPT, its popular but controversial large language model (LLM) chatbot. According to an article from the company, age prediction can “help determine whether an account likely belongs to someone under 18, so the right experience and safeguards can be applied to teens.”

Persona’s biometric age verification in large-language camouflage

When ChatGPT's age prediction gets it wrong, the fallback is an appeals process using traditional age verification based on Persona‘s selfie biometrics. OpenAI bills the process as a “fast,” “simple” and secure “identity-verification service,” but without clear and plain-spoken labels like “age verification,” “biometrics” or “facial recognition.”

OpenAI joins a growing list of companies using Persona’s biometric age verification. That list also includes Roblox, Substack, Reddit and an “experiment” by Discord.

‘Age prediction’ enters the lexical party, dressed in motley

The age prediction model “looks at a combination of behavioral and account-level signals, including how long an account has existed, typical times of day when someone is active, usage patterns over time, and a user’s stated age.”

In other words, it is categorically the same as the machine learning models YouTube has deployed for its age inference system: it reads and analyzes how a user interacts with ChatGPT, accruing ever more data – ostensibly to protect the well-being of youth, but also useful for feeding back into the model. “Deploying age prediction helps us learn which signals improve accuracy,” OpenAI says, “and we use those learnings to continuously refine the model over time.” ChatGPT is always learning, for one reason or another.
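OpenAI has not published its model, so the following is purely an illustrative sketch: the feature names, weights and threshold are invented for this example, not drawn from OpenAI or YouTube. It only shows the general shape of a system that scores the kinds of behavioral and account-level signals the company describes, and that defaults to the restricted experience when the score crosses a threshold.

```python
# Hypothetical sketch of signal-based age inference. None of these feature
# names, weights, or thresholds come from OpenAI; they only illustrate the
# kind of account-level signals described in the article.
from dataclasses import dataclass

@dataclass
class AccountSignals:
    account_age_days: int       # how long the account has existed
    late_night_ratio: float     # share of activity between 22:00 and 05:00
    stated_age: int             # self-reported age at signup
    avg_session_minutes: float  # typical usage pattern over time

def minor_likelihood(s: AccountSignals) -> float:
    """Combine signals into a rough 0..1 score that the user is under 18."""
    score = 0.0
    if s.stated_age < 18:
        score += 0.6                   # stated age as the strongest signal
    if s.account_age_days < 90:
        score += 0.1                   # new accounts carry less history
    score += 0.2 * s.late_night_ratio  # activity timing as a weak signal
    if s.avg_session_minutes > 120:
        score += 0.1
    return min(score, 1.0)

def apply_safeguards(s: AccountSignals, threshold: float = 0.5) -> bool:
    """Default to the restricted teen experience when the score is high."""
    return minor_likelihood(s) >= threshold

teen = AccountSignals(account_age_days=30, late_night_ratio=0.7,
                      stated_age=15, avg_session_minutes=150)
print(apply_safeguards(teen))  # True
```

A real system would learn such weights from labeled data rather than hand-tune them, which is exactly the feedback loop OpenAI describes when it says deployment "helps us learn which signals improve accuracy."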

For teens, less gore and violent role play, fewer dangerous viral challenges

In terms of safeguards, OpenAI says that if its system determines that a user is under 18, “ChatGPT automatically applies additional protections designed to reduce exposure to sensitive content.” This includes graphic violence and gore; “sexual, romantic, or violent role play;” “content that promotes extreme beauty standards, unhealthy dieting, or body shaming;” and “viral challenges that could encourage risky or harmful behavior in minors.”

It also aims to limit exposure to “depictions of self-harm” – a particularly salient point, given the growing number of dead people counseled by ChatGPT to take their own lives.

The firm says its approach is “guided by expert input and rooted in academic literature about the science of child development and acknowledges known teen differences in risk perception, impulse control, peer influence and emotional regulation.”

None of this is very specific. ChatGPT has been designed to become indispensable to its users: an assistant, friend, relationship counselor, lover. A core tenet of AI’s marketing push is that it can be, and do, whatever you want it to. Internally, there are likely to be more fights about how many limits are acceptable, when part of the mission is to erase them altogether.

Chat, can you disambiguate this for me?

It is telling that OpenAI cannot avoid reaching for established language, even as it desperately tries to brand itself as an agent of civilizational change. “ChatGPT uses an age prediction model to help estimate whether an account likely belongs to someone under 18,” its article says.

To predict and to estimate are different verbs. Prediction is forecasting – guessing, based on information at hand, what might be true in the future. If ChatGPT is doing “prediction,” the implication is that it has already analyzed its users’ data, and probably knows how old they are anyway. Given that premise, we may infer that, until now, OpenAI has been perfectly fine serving users under 18 whatever content their heart desires, even if it walks them off a cliff.



Comments

One Reply to “For ChatGPT, OpenAI rolls out age inference system similar to YouTube’s”

  1. It is also important to note that for a new or incognito user, no platform, not even AI, can predict or estimate age accurately, as there is no data to analyse. How many prompts are required to reach a reliable conclusion about a user’s age? If you want 95% accuracy – the benchmark voluntarily adopted for the UK’s “highly effective” age assurance – how long will it take a user to demonstrate that with statistical certainty?

    OpenAI is to be congratulated for applying safeguards until it knows a user is an adult – “When we are not confident about someone’s age or have incomplete information, we default to a safer experience.” But how long will the training wheels remain in place? For many adult users, the option of a voluntary age assurance process will be more attractive than waiting to be recognised as 18+.
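The commenter's question about reaching 95 percent confidence can be made concrete with a toy model. Assume, purely for illustration, that each prompt yields an independent signal about whether the user is a minor, correct with some probability p (the 0.65 used below is an arbitrary assumption, not a figure from OpenAI or the UK benchmark), and that the system takes a majority vote. The binomial calculation below finds how many such signals are needed before the vote is right at least 95 percent of the time.

```python
# Toy binomial model: how many independent per-prompt signals, each correct
# with probability p, does a majority vote need to be right 95% of the time?
# The value of p here is an illustrative assumption, not a measured figure.
from math import comb

def majority_accuracy(n: int, p: float) -> float:
    """P(majority of n independent signals is correct), each correct w.p. p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

def prompts_needed(p: float = 0.65, target: float = 0.95) -> int:
    """Smallest odd n whose majority vote reaches the target accuracy."""
    n = 1
    while majority_accuracy(n, p) < target:
        n += 2  # odd n avoids ties
    return n

print(prompts_needed(0.65, 0.95))
```

With weak per-prompt signals the required count grows quickly, which supports the commenter's point that a brand-new account simply cannot be classified reliably; with a strong signal (say p = 0.9), a handful of prompts suffices.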

