
Kids online safety hits a legal and political breaking point in the US

Federal lawmakers clash over preemption as an avalanche of state laws accelerates despite court losses

The effort to regulate children’s online safety in the United States has entered one of its most fractured and legally volatile phases yet, as Congress advances a sweeping but internally conflicted slate of bills while states continue to pass aggressive restrictions that increasingly collide with federal courts.

What began as a broadly shared concern about the effects of social media and digital services on children has evolved into a sprawling, high-stakes struggle over constitutional limits, federal preemption, and the future shape of Internet regulation.

At the federal level, lawmakers have revived momentum around children’s online safety after years of stalled negotiations, but consensus remains elusive. The centerpiece of the debate is the Kids Online Safety Act (KOSA), which has been reintroduced in competing forms in the House and Senate.

The bipartisan Senate version, backed by sponsors including Democratic Sen. Richard Blumenthal and Republican Sen. Marsha Blackburn, retains a broad “duty of care” requirement that would obligate platforms to proactively mitigate harms to minors, including risks related to mental health, harassment, and algorithmic amplification.

The House version, which cleared subcommittee markup this month as part of a broader package of eighteen online safety bills, strips that duty of care language and narrows the categories of covered harms.

Privacy advocates and several Senate Democrats have warned that the House approach risks weakening enforcement while potentially preempting stronger state protections, setting up a direct conflict between the two chambers of Congress.

Alongside KOSA, Congress is also advancing the Children’s Online Privacy Protection Act 2.0 (COPPA 2.0), an update to existing law that would extend privacy protections to teens up to age 16, restrict targeted advertising to minors, and strengthen data minimization requirements.

Other federal proposals include app store accountability measures that would shift age-verification obligations from individual platforms to Apple and Google, as well as legislation targeting the spread of child sexual abuse material and nonconsensual intimate deepfakes.

But even as these bills move forward, lawmakers remain divided over the fundamental question of whether federal law should establish a national floor for children’s online safety or preempt the rapidly expanding patchwork of state laws now governing digital services.

That state-level activity has become the defining feature of the policy landscape in 2025.

According to the Computer & Communications Industry Association’s (CCIA) annual State Landscape: Online Safety report released this week, dozens of states have enacted or introduced new online safety laws over the past two legislative cycles, many of them borrowing language and concepts from one another and testing the outer limits of constitutional authority.

The measures span a wide range of approaches, from age-verification mandates and parental consent requirements to device-level content filters, social media warning labels, and sweeping “duty of care” liability provisions that regulate platform design and recommendation systems.

Legal experts warn that this momentum shows little sign of slowing, even as courts repeatedly strike down similar statutes. Sheila Millar, a consumer protection attorney at Keller & Heckman, has predicted an “avalanche” of new state laws related to children’s online safety over the next one to two years.

States, Millar said, continue to replicate provisions that have already been found unconstitutional, driven by bipartisan political pressure and a form of regulatory one-upmanship in which lawmakers seek to outdo neighboring states with tougher rules.

The absence of a strong federal standard with meaningful preemption has allowed inconsistent definitions, thresholds, and enforcement mechanisms to proliferate, making compliance increasingly difficult for companies of all sizes and ensuring that many of these laws end up in court.

Age verification has emerged as the most aggressively pursued and most frequently challenged approach. At least 24 states have enacted some form of age-verification requirement, often requiring users to submit government identification, biometric data, or parental consent documentation to access online services.

Federal courts meanwhile have repeatedly intervened. In NetChoice v. Bonta, a federal judge issued another preliminary injunction against California’s Age-Appropriate Design Code, ruling that the state cannot dictate what lawful speech users may access or impose vague standards on services “likely to be accessed by children.”

Similar outcomes followed in NetChoice v. Griffin in Arkansas, where a permanent injunction blocked the state’s age-verification law, and in NetChoice v. Yost, where Ohio’s parental notification statute was struck down as unconstitutional.

Other challenges have produced more mixed results. In Florida, a federal district court initially enjoined portions of HB 3, which bans children under 14 from holding social media accounts and requires parental consent for older teens.

The Eleventh Circuit later stayed that injunction, allowing the law to remain in effect while litigation continues.

In Mississippi, the U.S. Supreme Court declined to block the state’s age-verification law on emergency review, although Justice Brett Kavanaugh wrote separately to signal that the statute is likely unconstitutional on the merits.

In Tennessee, a district court declined to issue a preliminary injunction against a similar law, finding insufficient evidence of immediate harm, a decision now on appeal.

As courts have narrowed states’ ability to impose age verification directly on websites, lawmakers have increasingly turned to app store-level regulation to sidestep earlier defeats.

Texas, Utah, Louisiana, and California have passed or enrolled laws requiring app stores to verify users’ ages and enforce parental consent before minors can download apps or make in-app purchases.

Texas’s SB 2420, often described as an App Store Accountability Act, is now the subject of a first-of-its-kind First Amendment challenge filed by CCIA, which argues that the law compels speech, undermines user privacy, and imposes unconstitutional burdens on interstate commerce.

That case joins a crowded Texas docket that already includes ongoing litigation over the state’s content moderation law and its earlier attempt to broadly age-gate online content.

Beyond age verification, lawmakers have also experimented with broader structural interventions. Several states have adopted versions of the age-appropriate design code modeled on a U.K. framework that requires services to estimate users’ ages and default to the most privacy-protective settings for minors.

Nebraska and Vermont enacted new age-appropriate design code laws this year, even as courts continue to scrutinize similar statutes elsewhere.

Critics argue that age-estimation technologies rely heavily on facial analysis tools that remain inaccurate and intrusive, a concern reinforced by recent evaluations from the National Institute of Standards and Technology cited in the CCIA report.

Other proposals push even further, mandating default content filters on smartphones and tablets, imposing warning labels on social media platforms, or creating new causes of action against companies that knew or “should have known” their product designs might harm minors.

The CCIA report warns that many of these laws are technologically impractical, risk sweeping in lawful speech, and expose states to costly and prolonged litigation.

Florida and Texas alone have already spent millions of taxpayer dollars defending statutes that have yet to survive final judicial review, a pattern likely to repeat as additional laws take effect and face challenge.

Amid this legislative churn, federal enforcement has continued to operate as a partial backstop. In the absence of a comprehensive national privacy law, the Federal Trade Commission (FTC) has relied on COPPA and its authority to police unfair or deceptive practices to pursue cases involving children’s data.

Former and current FTC officials have signaled that, regardless of broader political shifts, the agency remains particularly focused on business models that collect or monetize minors’ information, underscoring that federal restraint in legislating does not equate to federal withdrawal from enforcement.

Taken together, the picture that emerges is one of regulatory escalation paired with sustained judicial resistance. Legislators across the political spectrum continue to frame these laws as urgent responses to mounting evidence of online harms to children.

Courts, however, have repeatedly emphasized that good intentions do not override constitutional limits, particularly when laws compel identity disclosure, restrict anonymous speech, or rely on vague and subjective standards that chill lawful expression.

As Congress debates whether and how to step in with a national framework, the unresolved question is whether federal legislation will harmonize this fragmented landscape or freeze it in place by preempting state experimentation without resolving its legal flaws.

For now, the U.S. is left with an increasingly uneven system in which children’s online experiences, companies’ compliance obligations, and users’ constitutional rights vary dramatically depending on geography, while the ultimate contours of permissible regulation remain unsettled in the courts.
