Congress, states crack down on AI sexual deepfakes

Congress and state attorneys general are moving fast to crack down on AI-generated sexual imagery even as Washington has yet to enact laws to curb political deepfakes before the 2026 elections.
On May 19, President Donald Trump signed the Take It Down Act, the first federal statute to directly target the non-consensual posting and distribution of intimate imagery, including AI-generated “deepfakes.”
The law requires covered platforms to remove flagged images within 48 hours of a valid notice and gives the Federal Trade Commission (FTC) authority to enforce the takedown requirement. It also creates new criminal penalties for individuals who publish intimate depictions without consent. Platforms have one year to build the mandated reporting and removal systems; the criminal provisions are already in effect.
After the Senate’s February vote, the bill’s sponsor, Republican Sen. Ted Cruz, framed the measure as a practical tool for victims. “The Take it Down Act gives victims of revenge and deepfake pornography, many of whom are young girls, the ability to fight back,” he said.
Following the bill’s signing ceremony in the White House Rose Garden, Cruz called the legislation “an historic win for victims” and praised the “bravery and dedication of Elliston Berry,” the Texas student whose case helped galvanize action.
Berry, who attended the signing, described what the months of waiting for accountability felt like as explicit fakes of her spread among peers. “I had PSAT testing the next day, the last thing I needed was to wake up and find out that someone made fake nudes of me,” she told CBS News, adding, “I can’t go back and redo what he did, but instead, I can prevent this from happening to other people.”
The new federal law sets a floor, and states have been racing to erect their own guardrails. Maryland’s SB 360, effective July 1, broadened its “revenge porn” statute to include computer-generated depictions and strengthened civil remedies; criminal penalties can reach two years in prison and $5,000 in fines.
Texas, meanwhile, enacted Senate Bill 20, the Stopping AI-Generated Child Pornography Act, which created new offenses for “obscene visual material” that appears to depict a minor, whether real, animated, or AI-generated. The bill took effect September 1.
Washington state expanded its framework as well, criminalizing the willful distribution of forged digital likenesses and reinforcing protections against fabricated intimate images.
State attorneys general are also marshaling coordinated pressure on the tech ecosystem that enables sexual deepfakes. In late August, a bipartisan coalition announced letters to major search engines and payment platforms urging concrete steps to restrict “nudify” and “undress” tools and to cut off monetization of businesses that sell them.
California Attorney General Rob Bonta said the aim is to push companies that “are indirectly part of the ecosystem” to become “part of the solution,” noting that such images have been used to “bully, harass, and exploit people all over the world.”
Massachusetts Attorney General Andrea Joy Campbell co-led the coalition, which comprised 47 attorneys general pressing for stronger action against the spread of computer-generated, nonconsensual intimate imagery.
In its letter to search engines, the coalition detailed the companies’ failures to limit the creation and spread of deepfakes and called for stronger safeguards, such as warnings and redirects that steer users away from harmful content.
In a separate letter to payment platforms, the coalition urged the companies to identify services that create deepfake pornography and cut off their payment processing.
By contrast, the policy response to political deepfakes remains piecemeal. Congress has floated the REAL Political Advertisements Act and the Protect Elections from Deceptive AI Act – proposals that would require disclosures or ban materially deceptive AI in campaign ads – but neither has become law.
In the absence of legislation, federal action has come through agencies. In February 2024, the Federal Communications Commission (FCC) unanimously clarified that AI-generated voice calls fall under the Telephone Consumer Protection Act’s restrictions on “artificial or prerecorded voice,” making such robocalls unlawful without the required consent.
The move followed an episode in which calls mimicking President Joe Biden’s voice targeted New Hampshire voters. In August 2024, the FCC opened a rulemaking to require on-air and written disclosures for AI-generated content in political ads on broadcast, cable and satellite. The proposals are still pending.
States are testing the limits of what can be policed around election speech and are running into the First Amendment. California’s attempts have repeatedly been enjoined. In October 2024, a federal judge blocked AB 2839, which created private lawsuits over election deepfakes, calling it a “blunt tool” that risks suppressing protected expression.
In August, a federal judge likewise struck down AB 2655, known as the Defending Democracy from Deepfake Deception Act, which required large platforms to block or label certain “materially deceptive” election content around voting periods. X Corp., among others, challenged the law. The court found it likely unconstitutional.
Minnesota’s election deepfake statute is also in court. Litigation brought by X Corp. argues that the law criminalizes protected political speech and impermissibly burdens platforms.
Those challenges, along with California’s setbacks, have chilled other states considering similar bans. Even so, as of July 10, 28 states had enacted laws covering political deepfakes.
A separate federal debate centers on likeness and voice rights. The bipartisan No Fakes Act would give individuals federal control over “digital replicas,” set up a DMCA-style takedown process, and create a private right of action with carve-outs for news and parody.
Entertainment and music groups have rallied behind the bill. Tech platforms like YouTube signaled support this spring, and senators held a May 21 hearing to air the case for action. Critics, including civil liberties advocates and some academics, warn that sweeping protections could chill lawful parody and remix culture.
Outside politics, the fraud front is widening. On September 3, the American Bankers Association (ABA) Foundation and the FBI released a public-facing infographic explaining how manipulated images, video and audio are fueling impostor schemes and other consumer scams.
The ABA highlighted FBI figures, noting that since 2020, consumers have filed more than 4.2 million fraud reports totaling over $50.5 billion in losses, with a growing slice involving deepfake-enabled deception.
“Deepfakes are becoming increasingly sophisticated and harder to detect,” said Sam Kunjukunju, the ABA Foundation’s vice president of consumer education. “This infographic provides practical tips to help consumers recognize red flags and protect themselves.”
Jose Perez, the FBI’s Criminal Investigative Division assistant director, said educating the public is essential “so they can spot deepfakes before they do any harm.”
The ABA Foundation will revive its #BanksNeverAskThat and #PracticeSafeChecks campaigns in October and continues its Safe Banking for Seniors program for elder-fraud prevention.
The uneven legal map reflects a broader trend. Since 2019, states have passed deepfake laws at a blistering pace. By late July, 47 states had at least one deepfake law on the books across categories. Many of these measures focus on sexual imagery, reflecting a political consensus that intimate abuse is both pervasive and addressable without entangling core political speech.
The Take It Down Act imposes a national baseline against non-consensual intimate imagery – with both a criminal backstop and civil enforcement by the FTC – while leaving political content squarely in constitutional crosswinds.
Under the new statute, covered platforms must build 48-hour takedown pipelines by May 19, 2026, and are shielded when they remove reported content in good faith. Individuals who knowingly publish intimate fakes face criminal exposure.
But for election deepfakes, voters are still largely reliant on a shifting patchwork of state rules, defamation law, platform policy and, if the FCC finishes its rulemaking, broadcast-ad disclosures that wouldn’t reach most online channels.
Advocates who championed the Take It Down Act – some of whom simultaneously warned about its potential for misuse – see it as a long-overdue baseline for intimate abuse, even as they press for careful tailoring elsewhere.
Civil liberties groups, meanwhile, have urged caution about any regime that pushes platforms toward heavy-handed removal, especially in the political realm. The strain between those positions helps explain why Congress moved quickly on sexual deepfakes but not on campaign speech.
Absent further federal action, next year is likely to bring more state experimentation, more court fights, and growing pressure on platforms to draw their own lines before the 2026 election cycle kicks into high gear.