US lawmakers move to restrict AI chatbots used by kids

A bipartisan pair of House and Senate bills would impose new federal restrictions on AI chatbots, including a ban on minors using AI “companions.”
The bills would require age verification for chatbot users and repeated disclosures that users are interacting with a machine, and would impose civil and criminal penalties on companies whose systems expose children to sexual content or encourage self-harm.
The legislation, called the Guidelines for User Age-verification and Responsible Dialogue Act (GUARD Act), was introduced in the Senate by Sens. Josh Hawley, Richard Blumenthal, Katie Britt, Mark Warner, and Chris Murphy.
A House companion bill was introduced by Reps. Blake Moore and Valerie Foushee.
The legislation arrives as lawmakers in both parties are increasingly focused on children’s online safety, age verification, social media design, algorithmic harms, and the use of AI systems in products marketed to or accessible by minors.
The Senate Judiciary Committee unanimously advanced its bill on April 30, giving the proposal early bipartisan momentum amid growing congressional concern over children’s use of emotionally responsive AI systems.
The bill defines an AI companion as an AI chatbot that provides adaptive, human-like responses and is designed to simulate interpersonal or emotional interaction, friendship, companionship, or therapeutic communication.
The legislation defines an AI chatbot more broadly as an interactive computer service or software application that accepts open-ended natural language or multimodal input and produces adaptive or context-responsive output, while excluding narrow-purpose tools whose replies are limited to contextualized responses.
At the center of the legislation is a prohibition on minors accessing AI companions. If an age verification process determines that a user is under 18, the covered company would have to block that user from accessing or using any AI companion the company owns, operates, or otherwise makes available.
The House and Senate versions would also require every person accessing an AI chatbot to create a user account. Existing accounts would have to be frozen once the law takes effect until the user provides verifiable age data, and new users would have to be verified at account creation.
The proposal would not allow companies to rely solely on a user entering a birth date or checking a box stating that they are not a minor.
Instead, the bill calls for “reasonable age verification” measures, including a government-issued ID or another commercially reasonable method that can reliably determine whether a user is an adult and prevent minors from accessing AI companions.
It also bars companies from treating a shared IP address, device identifier, or other technical signal as sufficient proof that a user is an adult.
Because that requirement would likely push chatbot providers toward identity verification systems, the legislation includes data security language.
Companies would have to limit collection of personal data to what is minimally necessary for age verification or compliance, protect age verification data from unauthorized access, transmit it using industry-standard encryption, retain it only as long as reasonably necessary, and refrain from sharing, transferring, or selling the data to another entity.
Third-party age verification vendors could be used, but their involvement would not relieve the chatbot provider of its legal obligations or liability.
The GUARD Act also requires chatbot systems to repeatedly disclose that they are not human. Each AI chatbot would have to clearly and conspicuously tell users at the start of each conversation and at 30-minute intervals that it is an AI system and not a human being.
The chatbot would also have to be programmed not to claim it is human or to respond deceptively when asked whether it is human.
If enacted, the legislation would prohibit chatbots from presenting themselves as licensed professionals such as therapists, physicians, lawyers, or financial advisers.
At the start of conversations and at regular intervals, chatbot operators would have to disclose that the system does not provide medical, legal, financial, or psychological services and that users should consult a licensed professional for that advice.
The criminal provisions are among the bill’s most aggressive features. The measure would create a new chapter in Title 18 of the U.S. Code governing AI chatbots.
It would make it unlawful to design, develop, or make available an AI chatbot while knowing, or with reckless disregard for the fact, that the system poses a risk of soliciting, encouraging, or inducing minors to engage in, describe, or simulate sexually explicit conduct, or to create or transmit visual depictions of sexually explicit conduct.
Violations could carry fines of up to $100,000 per offense.
“The GUARD Act is a critical step to draw lines in the sand with Big Tech and ensure that minors are protected from chatbots that mimic romantic and social companionship,” Moore said. “Parents and policymakers alike need to ground our children’s development in real-world interactions rather than push them further into the unaccountable black hole of frontier technology.”
“AI chatbots put the mental and physical health of young people at risk,” said Warner. “I’m encouraged to see this bipartisan legislation advance through committee. It is time to put clear guardrails in place to protect children from manipulative or dangerous chatbot interactions and hold tech companies accountable.”
Child safety advocates praised the bill. Haley McNamara, executive director and chief strategy officer of the National Center on Sexual Exploitation, said “the time to just trust AI chatbots with our kids is over,” arguing that the harms are already occurring and that the GUARD Act would make violations punishable by law.
The Alliance for a Better Future also backed the House bill, saying chatbots can function as tutors, confidants, therapists, companions, and “suicide coach” systems around the clock, creating emotional risks for children that differ from earlier consumer technologies.
Privacy and free speech groups were more critical. Ashkhen Kazaryan, senior legal fellow at The Future of Free Speech, warned that requiring government ID or equivalent age verification for Americans who want to interact with AI chatbots burdens the speech and associational rights of adults as well as minors.
Fight for the Future policy strategist Jibran Ludwig called the bill “a Trojan horse for universal online ID checks.”
“The proposal is framed as a response to alarming cases involving AI companions and vulnerable young users, but the text of the bill goes much further, and could require age gates even for search engines that use AI,” said the Electronic Frontier Foundation (EFF).
“The GUARD Act won’t just target a narrow category of risky chatbots. It would require companies to verify the age of every user – then block anyone under 18 from interacting with a huge range of online systems,” the group said.