Potential for deepfake audio, video to warp reality is building to crisis level

Persona has published its fraud insights for the first quarter of 2025, and deepfakes take the headline. “The deepfake fight intensifies,” it says, noting a prediction from Deloitte’s Center for Financial Services that fraud losses for financial institutions and their customers could reach $40 billion by 2027.
As the algorithmic technology used to create deepfakes evolves, the ecosystem grows more complex. Generative AI has enabled huge advancements in synthetic media. Persona points to OmniHuman, a “multimodality-conditioned human video generation model” developed by ByteDance, which can “generate incredibly realistic human videos from just a single image and a motion signal.” Liveness detection and defense against injection attacks are becoming critical; according to Gartner, injection attacks increased by 200 percent in 2023.
On the document front, so-called cheapfake IDs are easier and cheaper than ever to generate, and are still the most common attack vector. But things are getting weirder at a rapid pace. Persona looks at AI fraud agents, quoting a January 2025 report from the U.S. Department of Homeland Security’s Science and Technology Directorate:
“We foresee a future where an interactive AI agent can generate different types of deepfakes to achieve a higher-level goal. This agent will be able to react to the context it operates in to make its deception more believable.”
In other words, we have a Skynet situation, in which an algorithm gets smart enough – and learns enough from humans – to orchestrate its own attacks (in this case fraud, rather than James Cameron’s nuclear scenario).
And so the fight, so to speak, is on. Deloitte sees the market for deepfake detection growing from $5.5 billion in 2023 to $15.7 billion in 2026. The potentially severe consequences of a deepfake attack are forcing businesses to reevaluate the importance of trust and safety teams. “Rather than being viewed as a cost center,” Persona says, “leadership might view the fraud team as an important part of building a brand, maintaining trust with customers, and protecting revenue.”
The bottom line is that technology is enabling fraud at scale, and the response by governments has introduced new regulatory challenges. For Persona, the problem is big enough that algorithms alone aren’t enough to address it. “AI-powered models can be part of an effective response, but don’t fall into the trap of over-relying on AI,” it says. “As a business, the best option is to embrace a multi-layered defense that doesn’t rely solely on visual signals.”
Schwalb warns DC: ‘your phone might be lying to you!’
To be frank, big bad deepfakes can sound like an improbable problem, a science fiction threat exaggerated for effect, drawn from the world of Sam Raimi’s Darkman and other masters of disguise. Ironically, the issue comes into clearer focus when the visual element is removed: most people with a phone number have received a scam call from a stranger. The stakes get much higher when the caller sounds like someone you love – that is, when it’s an audio deepfake.
Brian L. Schwalb, attorney general of the District of Columbia, has issued a consumer alert to residents, warning of “sophisticated telemarketing scams that target victims with fake audio or video recordings of people they know, often asking for money to help them get out of an emergency situation.”
“We are witnessing a disturbing upward trend of scammers preying on District residents, particularly seniors, using artificial intelligence to steal their money, sensitive information and data,” Schwalb says in a release. “I urge everyone to be cautious when receiving unexpected calls or messages, especially those that relay an unusual sense of urgency or request personal information.” If Aunt Bea calls needing bail money, for example, it’s probably worth double-checking.
Reality Defender lists 5 transformations in AI voice detection
Rather than trying to get ahold of the real Aunt Bea while the possibly fake Aunt Bea is on hold, one might deploy deepfake detection tools. This applies even more if one is a financial services company. In a post for Reality Defender’s blog, CEO Ben Colman argues that the rapid evolution of voice synthesis and cloning has exposed serious flaws in legacy security systems, and that reliable voice AI detection has “quickly become a cornerstone of modern financial security.”
Colman’s post lists five transformations “redefining how the sector defends against voice-based threats.” They include real-time biometric analysis, multi-layered detection that analyzes metadata and behavioral patterns, integration with existing authentication systems, adaptive machine learning algorithms, and cross-industry threat intelligence sharing.
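What does “multi-layered” mean in practice? Below is a minimal, illustrative sketch of how such a system might fuse an acoustic score with metadata and behavioral signals, so that a cloned voice which fools the audio model alone can still be caught by another layer. Every function name, weight, and threshold here is hypothetical; this is a sketch of the general technique, not a description of Reality Defender’s actual pipeline.

```python
# Illustrative sketch of multi-layered voice-deepfake screening.
# All names, weights, and thresholds are hypothetical examples.

from dataclasses import dataclass

@dataclass
class CallSignals:
    acoustic_score: float   # 0..1, from a (hypothetical) audio-artifact model
    metadata_score: float   # 0..1, e.g. codec or header anomalies
    behavior_score: float   # 0..1, e.g. unusual call timing or phrasing

def fuse_scores(signals: CallSignals) -> float:
    """Weighted fusion so no single layer decides on its own."""
    weights = {"acoustic": 0.5, "metadata": 0.2, "behavior": 0.3}
    return (weights["acoustic"] * signals.acoustic_score
            + weights["metadata"] * signals.metadata_score
            + weights["behavior"] * signals.behavior_score)

def triage(signals: CallSignals,
           review_at: float = 0.5, block_at: float = 0.8) -> str:
    """Route a call: pass, escalate to human review, or step up authentication."""
    risk = fuse_scores(signals)
    if risk >= block_at:
        return "block-and-step-up"  # e.g. force out-of-band verification
    if risk >= review_at:
        return "human-review"
    return "pass"

# Example: strong acoustic evidence plus odd metadata flags the call.
print(triage(CallSignals(acoustic_score=0.9,
                         metadata_score=0.6,
                         behavior_score=0.4)))
```

The design point the sketch illustrates is redundancy: lowering the weight of any one signal limits how much an attacker gains by defeating that single layer, which is the logic behind combining acoustic analysis with metadata and behavioral patterns in the first place.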
“As voice AI technology continues to advance, so too will the sophistication of detection methods,” Colman says. “According to Fortune Business Insights, the voice biometrics market is projected to grow from $2.30 billion in 2024 to over $15 billion by 2032, reflecting the increasing importance of this technology in security frameworks.”
Risk rises as tech keeps getting better at digital fakery
Of course, it is not just voice tech that is evolving, and in an interview with Bobsguide, Colman predicts that AI-generated video deepfakes will become a more significant concern as the cost of computing decreases.
“In the wrong hands,” he says, “this can create risks unlimited by your imagination.”
Increasingly, imagination isn’t even necessary, as the real things begin warping online reality into a new shape. Binance founder Changpeng Zhao has responded on X to a deepfake clip of himself speaking fluent Mandarin, confessing: “I couldn’t distinguish that voice from my real voice.”
“While CZ has always embraced emerging technologies, this moment seems to mark a shift,” says a post on the Binance forum. “Zhao did not specify who created it or what its purpose is, but his message is very clear: this technology is becoming extremely dangerous.”
Should deepfakes be illegal, period?
The same message is coming from multiple directions. There are words of warning from politicians like Schwalb, and from Federal Reserve Governor Michael S. Barr, who recently gave a talk on the deepfake threat and “AI arms race” at the Federal Reserve Bank of New York, arguing the financially pragmatic position that “we need to take steps to make attacks less likely by raising the cost of the attack to the cybercriminals and lowering the costs of defense to financial institutions and law enforcement.”
There are warnings from academics and scientists, from media and national security professionals, from governments and telecoms and banks.
Most significantly, there have been plenty of warnings from the very companies that create these AI technologies that they present a huge risk – even an existential one.
All of which raises the question: is there anything redeeming at all about deepfakes and the engines that generate them – and if not, why are they legal at all?