Senate hearing on kids and screens opens the door to expansive tech enforcement

When the Senate Committee on Commerce, Science, and Transportation convened its “Plugged Out” hearing last week on the impact of technology on America’s youth, lawmakers framed the discussion as a reckoning with a mounting crisis. They warned that youth mental health is deteriorating, attention spans are fraying, and childhood itself has been reshaped by smartphones, social media, and algorithmic feeds optimized for engagement rather than well-being.
But beneath the shared concern over harm, the hearing exposed a far more consequential and unresolved question: who should hold power over children’s digital lives – families, the state, or the very technology companies Congress increasingly distrusts?
Notably absent from the hearing were the voices of teenagers who will be directly affected by whatever new restrictions Congress ultimately adopts, even as lawmakers debated how and whether young people should be permitted to participate online.
At the same time, the youth safety debate is increasingly colliding with a broader policy shift toward normalizing identity verification and behavioral inference as prerequisites for participation in digital life.
Committee Chair Ted Cruz, a Republican and Trump administration stalwart, cast the issue as one of structural failure. Childhood, he argued, has been hijacked by online platforms that are designed to addict, manipulate, and monetize attention, leaving parents overwhelmed and children exposed.
Cruz positioned the hearing as part of a legislative push behind the Kids Off Social Media Act (KOSMA), a bipartisan bill he co-sponsored that would prohibit children under 13 from holding social media accounts and restrict algorithmic content recommendations for users under 17.
Cruz framed the proposal as a tool to “empower parents” and restore boundaries that technology has eroded. He also tied youth exposure to a darker enforcement narrative, invoking the rapid proliferation of AI-generated sexual abuse material and deepfakes involving minors.
In his telling, social media platforms and emerging AI systems are not merely distracting children; they are actively placing them at risk of exploitation, coercion, and psychological harm. The implication was clear: delay carries consequences, and Congress must act decisively.
“Parents are fighting a constant battle to keep their children safe in a rapidly evolving digital world … there’s been a stunning increase in exploitative AI-generated deepfake revenge porn and images of victims – often teenage girls,” Cruz said, adding that “there’s clearly more work to do to protect children online. Given the prevalence of online devices, children are often introduced to screens at a young age and use them for a significant portion each day.”
Ranking Member Maria Cantwell, a Democrat, largely agreed with the diagnosis but diverged sharply on the cure. Rather than focusing on access bans, Cantwell emphasized surveillance-based business models and the absence of comprehensive privacy protections.
Cantwell argued that youth harm cannot be disentangled from the data economy that incentivizes platforms to track behavior, personalize content, and optimize engagement.
In her questioning, Cantwell repeatedly returned to AI, warning that AI-powered “companion” systems represent a new and more intimate form of influence, one that simulates emotional reciprocity and could entrench dependency in ways social media never fully achieved.
Witness testimony reinforced both urgency and complexity. Dr. Jean Twenge, professor of psychology at San Diego State University, pointed to a clear inflection point around 2012, when smartphones and social media became ubiquitous among adolescents, aligning with measurable increases in depression, anxiety, and self-harm.
Cognitive neuroscientist Dr. Jared Cooney Horvath, director of LME Global, shifted attention to schools, warning that federal policy has quietly transformed classrooms into a mass distribution channel for screen exposure through one-to-one device programs and unvalidated educational technologies.
Emily Cherkin, author and founder of The Screentime Consultant, framed the issue as a civic crisis, arguing that childhood has been reorganized around screens in ways that undermine creativity, learning, and democratic participation.
Pediatrician and child development researcher Jenny Radesky, associate professor of pediatrics at the University of Michigan Medical School and chair of the American Academy of Pediatrics’ Council on Communications and Media, offered perhaps the most consequential reframing.
Drawing on her clinical work and leadership of the first behavioral research team at the Federal Trade Commission (FTC), Radesky cautioned against treating “screen time” as the core metric. The real harm, she argued, flows from product design choices – engagement optimization, persuasive interfaces, and personalization driven by behavioral data.
Without confronting those incentives, she warned, any policy focused solely on time limits or age thresholds risks missing the point.
That tension between design-based reform and access-based bans ran throughout the hearing, but was never fully resolved. And it is precisely at that fault line that external critiques of KOSMA become essential to understanding what the hearing set in motion.
In a sharply worded analysis published the following day, Electronic Frontier Foundation senior policy analyst Joe Mullin argued that KOSMA does not, in practice, empower parents at all. Instead, he warned, it hands unprecedented authority to Big Tech by forcing platforms to police family decisions under threat of legal liability.
Mullin noted that children under 13 are already barred from social media under longstanding platform policies rooted in federal privacy law, particularly the Children’s Online Privacy Protection Act (COPPA). The persistence of underage use, he argued, is not primarily the result of deception, but of family-mediated decisions made openly and with parental knowledge.
Studies back that claim. Research cited by Mullin shows that most under-13 children who use social media do so with parental awareness or direct assistance. KOSMA, however, contains no exception for parental consent, shared family accounts, or supervised educational use.
If a platform “knows,” or can reasonably infer, that a child under 13 is using an account, it is legally required to terminate it. Knowledge under the bill includes what is “fairly implied” by usage patterns, a constructive knowledge standard that leaves platforms little room to avoid liability without aggressive monitoring and inference systems.
Age inference systems do not need a user to say, “I’m 12.” Instead, they estimate age based on patterns that, in aggregate, correlate with minors. The system outputs a confidence score or age range, not a definitive fact. Policy and enforcement rules then decide what level of confidence triggers action.
Once a confidence threshold is crossed, internal policy may require action to avoid liability. Under a law like KOSMA, failing to act after receiving such signals could be interpreted as willful blindness. Modern age-inference systems are AI systems – specifically machine-learning models – though they are rarely labeled that way in public-facing policy language.
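The mechanism described above can be sketched in a few lines: a model emits a confidence score, and a separate enforcement rule decides what score forces the platform to act. This is an illustrative sketch only – the class names, signals, and thresholds are hypothetical, not drawn from any real platform’s compliance system.

```python
# Hypothetical sketch of the inference-then-enforcement pattern described
# above. The model never "knows" a user's age; it outputs a probability,
# and policy picks the confidence level at which inaction becomes a
# liability risk. All names and thresholds here are invented.
from dataclasses import dataclass, field


@dataclass
class AgeInference:
    account_id: str
    p_under_13: float            # model confidence, 0.0 to 1.0
    signals: list = field(default_factory=list)  # patterns behind the score


def enforcement_action(inference: AgeInference,
                       verify_at: float = 0.6,
                       terminate_at: float = 0.9) -> str:
    """Map a probabilistic age estimate to a compliance decision."""
    if inference.p_under_13 >= terminate_at:
        return "terminate_account"       # constructive knowledge assumed
    if inference.p_under_13 >= verify_at:
        return "require_age_verification"  # demand ID or biometric proof
    return "no_action"


# Aggregated behavioral signals push the score past the verification bar.
guess = AgeInference("acct_42", 0.72, ["school-hours usage", "content tastes"])
print(enforcement_action(guess))  # -> require_age_verification
```

The sketch makes the policy stakes concrete: the consequential choices are not in the model at all, but in where the thresholds sit – numbers a legal team, not a parent, gets to set.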
Critics argue that KOSMA-style laws don’t reduce platform power. Instead, they formalize algorithmic judgment as law, turning probabilistic inference into a gatekeeper for participation in digital life.
Under KOSMA, enforcement would fall primarily to the FTC and state attorneys general, giving both broad discretion to interpret platform “knowledge” of a child’s age. The practical consequence, Mullin argues, is that platforms would err on the side of enforcement, demanding proof of age through government IDs, biometric scans, or other intrusive verification methods.
Parents would not gain control; they would lose it, as compliance decisions are handed to corporate algorithms and legal teams. In effect, KOSMA would deputize the very companies lawmakers accuse of harming children to decide which families retain access and which are locked out.
This critique dovetails uncomfortably with the hearing’s repeated warnings about surveillance and data exploitation.
A bill intended to protect children could instead accelerate the normalization of age-assurance systems that rely on biometric data or probabilistic inference, expanding data collection rather than curbing it. That contradiction went largely unspoken in the hearing room, but it looms large in the policy path ahead.
A broader international perspective further complicates the picture. Analysis from Brookings Institution researchers places KOSMA within a global surge of age-based social media restrictions, including Australia’s forthcoming ban for children under 16 and proposals under consideration in Denmark and the United Kingdom.
While such measures are often popular in polling, the Brookings researchers caution that evidence of their effectiveness remains limited and that enforcement challenges, constitutional concerns, and unintended consequences are already emerging.
The researchers emphasized that bans often fail to reduce overall screen time, instead redirecting young users toward other platforms, gaming, or less regulated spaces online.
In the U.S., state-level bans and parental consent laws have repeatedly faced legal challenges on free expression grounds. Civil liberties groups argue that sweeping age restrictions risk infringing minors’ rights to access information and may disproportionately harm vulnerable groups, including LGBTQ+ youth who rely on online communities for support.
Civil liberties organizations also warn that such restrictions raise serious First Amendment concerns, particularly when access to information and expression is conditioned on age inference and identity verification.
Crucially, the Brookings analysis underscores the same design-based critique voiced by witnesses like Radesky. The harms attributed to social media are not confined to children. Addictive design, algorithmic amplification, and data-driven manipulation affect users of all ages. Targeting access rather than platform behavior may provide political momentum, but it leaves the underlying incentive structures intact.
Together, these critiques expose a central paradox of the hearing. Lawmakers repeatedly condemned engagement-driven algorithms and surveillance-based business models, yet the legislative response gaining traction risks reinforcing those systems by making them instruments of enforcement.
The hearing marked a turning point in tone. Congress is no longer debating whether youth harm exists, but lawmakers stopped short of resolving how to intervene without expanding corporate or governmental surveillance over everyday family life.
What happens next will likely unfold along two tracks. In the near term, the hearing record will be used to revive momentum behind KOSMA and related youth safety bills, with supporters pointing to AI-generated harms and mental health data as justification for swift action.
At the same time, critics are likely to press constitutional challenges and raise alarms about age verification, biometric collection, and the displacement of parental authority.
Beyond that immediate fight, the hearing may prove more consequential for what it signaled about the future of technology regulation. Cantwell’s focus on privacy and AI companions, combined with witness testimony emphasizing design and data practices, suggests a growing recognition that youth protection cannot be siloed from broader tech governance.
The “Plugged Out” hearing was not merely a hearing about kids and screens. It was an early confrontation with a harder question Congress can no longer avoid: whether protecting children online means limiting access, or finally confronting the economic and design choices that have made the digital environment hazardous for everyone.
Article Topics
age inference | age verification | biometric age estimation | children | Kids Off Social Media Act (KOSMA) | legislation | social media | U.S. Government | United States