AI prompts Newsom to sign new legislation on chatbots, age verification

California Governor Gavin Newsom has signed additional legislation aimed at protecting children online, as the state works to maintain its reputation as a leader in implementing safeguards such as biometrics to counter the risks of new technology.
A release says the policy is motivated by the emergence of AI, and it adds a variety of required features for platforms and products. These include age verification by operating system and app store providers, protocols to address suicide and self-harm, clearer warnings regarding social media and companion chatbots, and stronger penalties for the distribution of illegal deepfakes.
The chatbot safeguards require so-called companion chatbot platforms to disclose that interactions are artificially generated, and to share protocols for dealing with self-harm along with statistics on the volume of crisis center prevention notifications. They prohibit chatbots from representing themselves as health care professionals, and from showing children sexually explicit images they generate.
On deepfakes, the law imposes stronger penalties for deepfake porn by “expanding the cause of action to allow victims, including minors, to seek civil relief of up to $250,000 per action against third parties who knowingly facilitate or aid in the distribution of nonconsensual sexually explicit material.”
It also seeks “clear accountability for harm caused by AI technology by preventing those who develop, alter, or use artificial intelligence from escaping liability by asserting that the technology acted autonomously.” That’s the long way of saying liability lies with the humans who build and use AI, not with the machines.
Newsom says that “emerging technology like chatbots and social media can inspire, educate, and connect – but without real guardrails, technology can also exploit, mislead, and endanger our kids. We can continue to lead in AI and technology, but we must do it responsibly – protecting our children every step of the way. Our children’s safety is not for sale.”
A ‘public nuisance’ and a ‘health hazard’: NYC fed up with all the posting
Not to be outdone by California, New York City has served major social media companies with a lawsuit, accusing the firms of purposefully designing their products to be addictive to children.
The lawsuit is noteworthy in that it pins public harm on social media firms, accusing them of gross negligence and causing a public nuisance.
“New York City, like other parts of this nation, is battling an unprecedented mental health crisis among its youth and serious disruption to the public health, fueled by Defendants’ creation and promotion of addictive and dangerous social media platforms,” says the 327-page complaint filed in Manhattan federal court.
“Youth are now addicted to Defendants’ platforms in droves, resulting in substantial interference with school district operations and imposing a large burden on cities, school districts and public hospital systems that provide mental health services to youth.”
The complaint makes specific reference to social media’s role in driving “subway surfing,” a stunt that involves riding on the exterior of a moving subway train. A report from Reuters says the trend, driven by a TikTok challenge, has killed at least 16 people since 2023.
The lawsuit follows a 2024 declaration by New York City’s health commissioner labeling social media a “public health hazard.” Taken together with similar sentiments from the EU, UK and Australia, the parallels with the widespread effort to put warnings and age restrictions on cigarettes are increasingly hard to miss.
New York is seeking damages from Meta Platforms (parent of Instagram and Facebook), Alphabet (parent of Google and YouTube), Snap (owner of Snapchat), and TikTok owner ByteDance, on the grounds that they “exploit the psychology and neurophysiology of youth” for profit, while kids’ social skills and education suffer as a result.
It joins other governments, school districts and individuals pursuing approximately 2,050 similar lawsuits.
Social media emerging as the smoking of our time
The proverbial writing is on the wall. Facebook was created in 2004, with backing from Peter Thiel – currently shopping theories about the biblical antichrist to Silicon Valley power brokers. In 2006, it opened to anyone who was 13 or over with a valid email address. That’s twenty years of ballooning growth, influence, wealth and political sway for Mark Zuckerberg, Thiel, Elon Musk, the guy who owns Snap, and China.
While social media has had its moments, and has proven valuable in connecting certain dispersed and marginalized communities, it is becoming clear that those are edge cases compared to the scale of the overall problem – similar to an argument in favor of smoking as a social icebreaker. We are beginning to see, document and respond to the damage caused by the massive networks that have eaten up so much of the internet, and whose owners have sold them as the modern-day public square.
Mobile content sharing is a mainstay of twenty-first century culture; nobody’s going to stop texting. The smoking analogy is also useful as a framing because the legal action that led to the Tobacco Master Settlement Agreement targeted just four major tobacco companies. The social network may be endless, but the Social Networks are finite, and can be held accountable.
The first article linking smoking to lung cancer and heart disease appeared in 1950 in the UK. The U.S. Surgeon General made the connection in 1964. Major state-sponsored litigation began in earnest in 1994. But culture moves faster now – AI has already seized the spotlight – and what once took four decades could unfold much more quickly for what looks, more and more, like a case of selling out public health for the gain of a few morally bankrupt companies.
Article Topics
age verification | California | chatbots | children | legislation | New York City | social media | United States