Deepfake crisis grows dire, prompting new investment and calls for regulation

Deepfake detection and social engineering defense startup Doppel has raised $70 million in Series C funding. According to Fortune, the company has seen 400 percent growth in its enterprise customer base since 2024, and the fresh investment brings its valuation to $600 million.
Bessemer Venture Partners led the round, with participation from CrowdStrike CEO George Kurtz, NTT Docomo Ventures, Andreessen Horowitz, 9Yards Capital, Script Capital, South Park Commons, Sozo Ventures, Aurum Partners and Strategic Cyber Ventures. The round comes six months after the firm’s Series B.
The deepfake crisis has already caused some high-profile stirs – but it’s only the beginning. The threat is spurring a wave of deepfake detection startups, each aiming to keep pace with the rapid evolution of deepfake technology.
Kevin Tian, Doppel’s CEO, says that organizations are not prepared for the flood of AI-generated deception coming at them. “Over the past few months, what’s gotten significantly better is the ability to do real-time, synchronous deepfake conversations in an intelligent manner. I can chat with my own deepfake in real-time. It’s not scripted, it’s dynamic.”
Tian tells Fortune that Doppel’s mission is not to stamp out deepfakes, but “to stop social engineering attacks, and the malicious use of deepfakes, traditional impersonations, copycatting, fraud, phishing – you name it.” The firm says its R&D team has “just scratched the surface” of innovations it plans to bring to existing and upcoming products, notably in social engineering defense (SED). The Series C funds will “be used to invest in the core Doppel gang to meet the exponential surge in demand.”
A blog from the company quotes Elliott Robinson, partner at Bessemer Venture Partners, who says “Doppel stands out with its incredible speed, massive ambition, and laser focus on execution, which is a clear recipe for success. We believe these qualities will solidify Doppel as the unquestioned leader in social engineering defense for years to come.”
Extent of risk to likeness starting to become clear
The way things are going, facing the deepfake deluge is going to require all hands on deck. A report from the Associated Press says the nonprofit group Public Citizen is demanding that OpenAI pull Sora 2, its most recent generative video app, from circulation. Like other critics of Sam Altman’s AI powerhouse, Public Citizen accuses Altman of rushing the tool to market and showing a “reckless disregard” for product safety, “as well as people’s rights to their own likeness and the stability of democracy.”
Public Citizen tech policy advocate J.B. Branch, who wrote the letter, says “we’re entering a world in which people can’t really trust what they see. And we’re starting to see strategies in politics where the first image, the first video that gets released, is what people remember.”
But it’s not just high-profile figures whose likenesses are being co-opted by AI. A particularly disturbing headline from a recent report by 404 Media tells a grim story: “OpenAI’s Sora 2 Floods Social Media With Videos of Women Being Strangled.”
“The videos are usually 10 seconds long and mostly feature a ‘teenage girl’ being strangled, crying, and struggling to resist until her eyes close and she falls to the ground,” says the piece.
Branch says OpenAI is “putting the pedal to the floor without regard for harms. Much of this seems foreseeable. But they’d rather get a product out there, get people downloading it, get people who are addicted to it rather than doing the right thing and stress-testing these things beforehand and worrying about the plight of everyday users.”
US needs federal law to protect likeness and voice: Scientific American
In light of all this, Scientific American has weighed in on AI regulations – a topic of some urgency, given news that U.S. President Donald Trump intends to sign an Executive Order aiming to block states from enacting their own AI laws, on the grounds that the federal government has sole authority to regulate commerce between states.
The editorial lays out the scope of the problem: “Deepfakes – photographs, videos and audio tracks that use AI to create convincing but entirely fabricated representations of people or events – aren’t just an internet content problem; they are a social-order problem. The power of AI to create words and images that seem real but aren’t threatens society, critical thinking and civilizational stability. A society that doesn’t know what is real cannot self-govern.”
It’s also, specifically, a problem for women, given research that shows 96 percent of deepfakes are non-consensual and that 99 percent of sexual deepfakes target women. So-called nudify apps have proliferated, allowing anyone to create explicit deepfakes of whomever they please with just a photograph.
“In a survey of more than 16,000 people across 10 countries, 2.2 percent of them reported having been victims of deepfake pornography,” says the piece. “The Internet Watch Foundation documented 210 web pages with AI-generated deepfakes of child sexual abuse in the first half of 2025 – a 400 percent increase over the same period in 2024.”
Advocating for “laws that prioritize human dignity and protect democracy,” the piece points to the EU’s AI Act and Digital Services Act as models, and specifically to new copyright legislation in Denmark, which bans the creation of deepfakes without a subject’s consent. In the authors’ words, Denmark’s law would “legally enshrine the principle that you own you.”
In that, it is part of a growing recognition that, in a deepfaked world, everyone’s likeness is at risk of being stolen.
The authors’ conclusion is clear and (so to speak) actionable: “we should adopt a federal law protecting one’s right to their likeness and voice. Doing so would give people legal grounds to demand fast removal of deepfakes and the right to sue for meaningful damages.” The piece says the proposed NO FAKES (“Nurture Originals, Foster Art, and Keep Entertainment Safe”) Act would protect performers and public figures from unauthorized deepfakes – but argues it should protect everyone.
“The rise of deepfake technology has shown that voluntary policies have failed; companies will not police themselves until it becomes too expensive not to do so,” says the piece. “This is why Denmark’s approach is not just innovative; it’s essential. Your image should belong to you. Anyone who uses it to their own ends without your permission should be in violation of law.”