Deepfake legislation up against constant evolution of generative AI

Researcher Felipe Romero-Moreno examines ‘relentless cycle’ in deepfake detection

“Deepfake detection in generative AI: A legal framework proposal to protect human rights” is a newly published research paper by Felipe Romero-Moreno, which “undertakes technical and comparative legal analyses of deepfake detection methods.” It lands in the pages of The Computer Law & Security Review as deepfakes and related fraud attacks continue to plague politics, banking and other sectors – indeed, more and more corners of our lives.

Whether it’s financial losses, the spread of misinformation, sexual exploitation or targeted harassment, the damages from deepfakes can be severe. “Projected financial losses alone are expected to surge from $12.3 billion in 2023 to a staggering $40 billion by 2027,” writes Romero-Moreno. “The World Economic Forum has warned that cyber insecurity, including deepfakes, poses a long-term global risk to supply chains, financial stability, and democratic systems. These detection challenges necessitate an evolution in technical and regulatory frameworks.”

And yet, “comprehensive analysis of how diverse global regulations address these very detection technologies remains limited.” This is what Romero-Moreno sets out to provide.

His survey of existing technical approaches to deepfake detection quickly encounters a fundamental problem. “Traditional detection methods include artifact-based methods (scrutinizing inconsistencies), behavioral biometrics (e.g., typing speed), physiological signal analysis (e.g., heart rate), and deep learning-based methods (identifying manipulation patterns),” he says. “Multimodal hybrid approaches combine these techniques, analyzing video, audio, images, and text. Beyond these, promising solutions include liveness detection, Zero-Knowledge Biometrics (ZKB), blockchain-based verification, quantum computing, and adversarial training-based methods.”

But: “the challenge lies in deepfakes’ constant evolution.” The deepfake industrial complex does not sleep, and “the technological landscape of deepfake detection is characterized by a relentless cycle of innovation and circumvention.”

“While methods ranging from artifact analysis to advanced AI and techniques like blockchain and quantum computing offer valuable tools, their effectiveness is continuously tested by increasingly sophisticated generation techniques.”

Part of the solution lies in regulations – but these remain patchy throughout the world, “marked by jurisdictional divergences and internal tensions in balancing innovation with fundamental rights.” Ambiguity haunts documents like the EU’s AI Act and Digital Services Act (DSA). The GDPR “simultaneously enables and constrains” deepfake detection efforts.

In the U.S., state-level fragmentation and “the lack of specific federal deepfake legislation severely cripples innovation, leaving individuals vulnerable to deepfake harms.” According to BSA research, U.S. states are introducing more than 50 AI-related bills weekly, including 25 deepfake bills per week. Washington’s recently passed Take It Down Act criminalizes “knowingly publishing or threatening to publish non-consensual intimate imagery, including AI-generated deepfakes;” it has been criticized for how it might enable overreach. An additional five bills related to deepfakes are pending.

Unchecked, the gumbo approach could lead to a brew so messy and haphazard that it’s not worth a spoonful. Instead, Romero-Moreno believes, “a cohesive and globally coordinated response is paramount.”

“For lawmakers and regulators, the imperative lies in establishing harmonized international standards and adaptable national legislation. These frameworks must not only define prohibited uses and establish clear liability for the creation and dissemination of malicious deepfakes but also foster an environment that encourages responsible innovation in detection technologies.”

The author identifies specific regulatory challenges in the UK, China, the EU, and the U.S., and looks at UN Resolution 78/265 concerning safe, secure, and trustworthy AI systems. Ethical development and deployment are key across the ecosystem; “this entails prioritizing accuracy, reliability, and transparency through the integration of content provenance standards like C2PA.”

Ultimately, the message is to recognize the complexity of the problem, and assemble the right tools and defenses to fight it as it evolves, employing a “holistic and adaptive strategy, one that intricately weaves together advancements in technical detection, robust ethical considerations, agile governance frameworks, and clearly defined accountability measures across all stakeholders.”

“This requires ongoing dialogue, collaboration, and a sustained shared commitment from all stakeholders to ensure a future where technology empowers and informs, rather than deceives and endangers humanity.”

Patchwork grows as US lawmakers push deepfake laws

Absent a collective global agreement on deepfake regulation, such as an approved international standard, states and individual politicians continue to work at their individual patches of deepfake legislation. According to deepfake detection firm Reality Defender, as of mid-2025, “nearly every U.S. state has active AI-related bills.”

In the U.S. Senate, Sen. Jon Husted (R-Ohio) has introduced the Preventing Deep Fake Scams Act, a bipartisan, bicameral bill to address data and identity theft or fraud fueled by deepfakes. “Scammers are using deep fakes to impersonate victims’ family members in order to steal their money,” Husted says in a release. “As fraudsters continue to scheme, we need to make sure we utilize AI so that we can better protect innocent Americans and prevent these scams from happening in the first place.”

Lawmakers in the Pennsylvania Legislature have approved a law to combat the use of deepfakes to mislead voters by impersonating political figures. One of the more notorious deepfake cases to date saw voters in New Hampshire receive calls from a deepfaked Joe Biden, urging them not to vote in state primary elections.

Under the law, campaigns must disclose when they use AI-generated deepfakes in advertising; those who fail to do so can face fines for every day that an ad remains public – according to Lancaster Online, up to $15,000 in municipal elections, $50,000 in state elections and $250,000 in federal elections.

However, this only applies to ads running 90 days before an election or less – a fairly paltry window in today’s environment of perpetual campaigning. Philadelphia Democrat Tarik Khan, who sponsored the bill, says he views the legislation as a “first step toward further regulation.”

Texas gets first law that has consequences for deepfake creation sites

In Texas, Governor Greg Abbott has signed two bills regulating AI-generated intimate deepfakes. Public Citizen, an advocacy group tracking deepfake legislation, says the bills “include the first law in the country that holds platforms civilly liable for failing to protect minors from this type of media.”

HB 581 assigns civil liability to whoever owns the tool used to make the offending deepfake. For instance, under the law, the owner of the text-to-speech engine used to generate the Biden audio would be liable. SB 441, meanwhile, “criminalizes threatening to create an intimate deepfake to coerce, extort, harass, or intimidate another person.”

The vast majority of deepfakes used for nefarious purposes are of girls and women.

“The creation and dissemination of non-consensual intimate deepfakes can inflict damage that can last a lifetime,” says Adrian Shelley, Texas director of Public Citizen. “Across the country, state legislatures are recognizing this harm and are taking proactive steps to rein in their distribution, while holding those who produce them accountable.”

Get ahead of deepfake detection: Reality Defender

Reality Defender provides a good roundup of global deepfake laws in a recent post, “State of Deepfake Regulations in 2025: What Businesses Need to Know.”

“The regulatory timeline is accelerating,” writes Reality Defender VP of Human Engagement Gabe Regan. “As regulatory pressure around deepfakes grows, companies need to focus on three core compliance priorities: detection capabilities, clear disclosure policies and incident response plans tailored to synthetic media threats.”

“Staying ahead means using tools like industry information-sharing networks (ISACs), regulatory monitoring platforms and enterprise AI governance frameworks. Early movers gain a competitive edge by reducing risk and influencing emerging standards.”
