Kids Off Social Media Act gains House backing as Senate advances bill

A bipartisan coalition of lawmakers led by Rep. Anna Paulina Luna has introduced the House companion to S.278, the Kids Off Social Media Act (KOSMA), which the Senate Commerce Committee advanced last week.
The bill is one of the most sweeping federal attempts yet to limit young people's access to social media. It would prohibit children under 13 from creating or maintaining social media accounts and restrict how platforms engage teenagers under 17.
The bill reflects a growing bipartisan consensus that the architecture of modern social platforms – particularly algorithmic recommendation systems designed to maximize engagement – may be contributing to rising concerns about youth mental health, compulsive use and exposure to harmful content.
Debate on the bill comes as the world’s biggest social media companies face several landmark trials this year which seek to hold them responsible for harms to children.
Yesterday, a coalition of 40 attorneys general sent a letter to Congress supporting passage of the bill. Some, like California Attorney General Rob Bonta, urged policymakers to pass nationwide legislation that does not preempt California's own robust laws to protect children online.
“Protecting kids online is not a partisan issue. Parents across America are sounding the alarm about the real harms social media is causing, from anxiety and depression to exposure to dangerous content,” Luna said.
“Social media is a leading driver of poor youth mental health,” Sen. Brian Schatz said when S.278 was introduced in December. “Numerous studies show that the more children and teens use social media, the higher their risk of being depressed. Similarly, studies have revealed that when children and teens reduce or eliminate exposure to social media for longer than a month, their mental health benefits.”
The legislation would codify into federal law what many companies already claim to enforce through their terms of service, establishing a minimum age of 13 to hold an account. Unlike existing platform policies, however, the measure would make compliance legally enforceable.
If a company knowingly allows a child under 13 to maintain an account, it would be required to delete both the account and associated personal data. Enforcement authority would rest primarily with the Federal Trade Commission, with state attorneys general empowered to bring civil actions as well.
Beyond the age floor, the bill takes aim at one of the most contested features of social media, namely personalized recommendation algorithms. For users under 17, platforms would be prohibited from deploying algorithmic feeds that curate content based on behavioral data, engagement history or inferred interests.
Teens could still access chronological feeds or search for specific content, but the automated systems that power infinite scroll, autoplay and hyper-personalized recommendations would be off limits.
Supporters argue that these features are intentionally engineered to drive prolonged engagement and may amplify anxiety, body image pressures and exposure to harmful material.
Critics counter that algorithmic systems can also surface beneficial or educational content and that blunt prohibitions may produce unintended consequences.
The Information Technology and Innovation Foundation said in a statement last week that, “like other recent children’s online safety bills, KOSMA has many flaws, namely that it complicates compliance for platforms that already disallow children below age 13 and limits users’ ability to fully customize their online experience.”
The bill would extend into schools by requiring federally supported elementary and secondary institutions to block access to social media on school networks and devices.
This provision builds on existing internet safety obligations tied to federal funding and reflects lawmakers’ view that limiting in-school access is part of a broader digital health strategy.
The push in Congress mirrors a rapidly intensifying movement in Europe, where multiple governments are exploring or advancing age-based restrictions on social media, forming what observers have described as a "coalition of the willing."
Countries including France, Spain, Greece, the Netherlands, Denmark, Ireland and the United Kingdom are considering new minimum age thresholds or stricter parental consent requirements.
In France, lawmakers have moved toward barring children under 15 from social media. Spain and Greece are examining similar limits in the 15 to 16 range. Policymakers in several countries are also debating harmonized standards that could eventually apply across the European Union.
European discussions are occurring alongside enforcement of the EU’s Digital Services Act, which already imposes heightened obligations on large platforms to protect minors from harmful content and addictive design.
Regulators have signaled particular concern about features that encourage compulsive use, including endless scrolling and targeted recommendations. The debate increasingly centers not only on access but on the design of platforms themselves.
Australia’s decision to implement a nationwide under-16 social media ban, which took effect in late 2025, has added momentum to the global conversation.
Lawmakers in both the U.S. and Europe have pointed to Australia as evidence that comprehensive age-based restrictions are politically and administratively feasible, though questions remain about enforcement, circumvention and broader social effects.
At the heart of these proposals lies a tension between child protection and digital rights. Age-based bans and algorithmic restrictions inevitably raise questions about how platforms will verify users’ ages.
More robust age assurance systems could involve digital identity checks, document verification or even biometric technologies, all of which carry privacy implications.
Civil liberties advocates warn that forcing platforms to confirm age more aggressively may lead to expanded data collection, potentially creating new risks in the name of protection.
In the U.S., constitutional considerations loom as well. Any federal restriction on access to online platforms intersects with First Amendment protections for speech and association.
Courts have previously struck down or narrowed laws that unduly restrict minors' access to online content. If enacted, the Kids Off Social Media Act would almost certainly face judicial scrutiny testing whether its provisions are narrowly tailored to serve a compelling governmental interest.
Despite these uncertainties, the political momentum is unmistakable. Lawmakers across party lines increasingly frame youth social media use as a public health issue rather than simply a matter of parental oversight.
The Kids Off Social Media Act represents a shift from incremental transparency requirements toward more direct structural intervention in how platforms operate.
Whether or not the bill ultimately becomes law, its introduction signals a broader realignment in digital policy. The question is no longer whether governments will intervene in youth social media access, but how far they are willing to go, and what trade-offs they are prepared to accept in the process.
Article Topics
age verification | children | Kids Off Social Media Act (KOSMA) | legislation | social media | U.S. Government | United States