Reality Defender dials in on voice deepfake fraud in banking

As deepfake technology evolves, the variety and sophistication of phishing attacks continue to increase. Organizations may wonder how to protect themselves against deepfake attacks that can lead to significant financial losses and reputational damage.
Reality Defender is committed to tackling the deepfake problem and has two free resources that showcase how.
A downloadable case study from the New York deepfake detection firm breaks down how one of the world’s largest financial institutions uses Reality Defender’s tools to mitigate contact center deepfake threats and protect its high-net-worth clients from AI-driven fraud.
An on-demand webinar shares real-world examples of deepfake fraud in call centers and examines why traditional security measures are no longer sufficient to combat deepfake voice phishing (vishing).
“Recent figures indicate a 60 percent rise in AI-driven phishing attacks, reflecting the fast adoption of generative AI by cybercriminals,” writes Reality Defender VP of Human Engagement Gabe Regan in a recent blog on deepfake detection. “Traditionally, phishing scams relied on email or text messages designed to deceive victims into revealing sensitive information. In the new age of deepfakes, attackers are using AI-generated voice manipulations to take phishing tactics to a new level.”
Familiar voices allow deepfakes to prey on employees’ innate trust
At this point, most everyone in tech knows the story of the woebegone Arup employee in Hong Kong who transferred $25 million during a video call at the request of an executive who turned out to be a deepfake.
Calls like this are routine in the financial sector. In many ways, the telephone is still a primary tool for conducting business in the industry. That reliance puts the sector at risk, Regan says.
“The financial services industry is a prime target for vishing and deepfake scams due to its heavy reliance on verbal communication for critical transactions.”
He notes an alarming 393 percent increase in phishing attacks over the past year in the finance and insurance sectors. “Deepfakes are the second most frequent cybersecurity incident experienced by businesses in the last 12 months, and experts predict that U.S. industries will sustain $40 billion in losses to deepfake fraud by 2027.”
While the proliferation of instant messaging and scam calls has made people wary of the telephone, in most workplace scenarios, people have an inherent trust in familiar voices. Compliments can be weaponized for phishing, too: a recent Global Threat Intelligence Report from BlackBerry notes how “flattery or urgency in unexpected professional networking requests” can be used as a phishing lure.
Four ways to shore up defenses against deepfake fraud
Regan lists four proactive measures that organizations can take to protect themselves from deepfake attacks across the spectrum.
Advanced detection technologies are key. Forensic analysis and real-time monitoring can identify subtle clues and vocal anomalies in speech that traditional KYC verification systems aren’t equipped to pick up on.
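Reality Defender does not publish its detection pipeline, but a minimal sketch can illustrate the general idea of screening call audio for vocal anomalies. Everything in the snippet below is an assumption for illustration only: the `screen_call_audio` function, the feature choices, and the thresholds are placeholders, not the firm’s method, and production detectors rely on trained models rather than hand-set cutoffs.

```python
# Illustrative sketch only: a toy voice-anomaly screen, not Reality Defender's method.
# Assumes librosa and numpy are installed; thresholds are arbitrary placeholders.
import numpy as np
import librosa


def screen_call_audio(path: str, flatness_threshold: float = 0.30) -> dict:
    """Flag audio whose spectral characteristics look unusually uniform.

    Synthetic speech can show flatter, less variable spectra than a live
    caller, so this toy check measures spectral flatness and MFCC
    variability and raises a flag for a human reviewer.
    """
    y, sr = librosa.load(path, sr=16000, mono=True)

    # Spectral flatness: values closer to 1.0 mean a more noise-like, uniform spectrum.
    flatness = float(np.mean(librosa.feature.spectral_flatness(y=y)))

    # Frame-to-frame MFCC variability: unusually low variation can hint at synthesis.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    mfcc_variability = float(np.mean(np.std(mfcc, axis=1)))

    suspicious = flatness > flatness_threshold or mfcc_variability < 5.0
    return {
        "spectral_flatness": flatness,
        "mfcc_variability": mfcc_variability,
        "flag_for_review": suspicious,
    }
```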
Enhanced biometric authentication protocols and multi-factor authentication (MFA) for high-value transactions add extra layers of security.
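To picture what step-up authentication for high-value transactions can look like as policy logic, here is a rough sketch. The `Transaction` type, the dollar threshold, and the factor names are hypothetical, not any specific bank’s rules.

```python
# Illustrative step-up authentication policy for high-value transfers.
# Threshold, factor names, and the Transaction type are hypothetical.
from dataclasses import dataclass


@dataclass
class Transaction:
    amount_usd: float
    channel: str            # e.g. "phone", "web", "branch"
    voice_verified: bool    # passed a voice biometric check
    otp_verified: bool      # passed a one-time-passcode check


HIGH_VALUE_THRESHOLD_USD = 10_000  # placeholder threshold


def required_factors(tx: Transaction) -> list[str]:
    """Return the additional factors the policy demands before releasing funds."""
    needed = []
    if tx.amount_usd >= HIGH_VALUE_THRESHOLD_USD:
        # A familiar voice alone is never enough for a high-value transfer.
        if not tx.otp_verified:
            needed.append("one-time passcode to a registered device")
        if tx.channel == "phone" and not tx.voice_verified:
            needed.append("voice biometric re-verification")
    return needed


def approve(tx: Transaction) -> bool:
    """Approve only when no additional factors remain outstanding."""
    return not required_factors(tx)
```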
Employees must know about the threat, making training and awareness crucial. And they must be able to respond, which requires incident response planning that lays out clear protocols in the event of a deepfake-enabled attack.
Regan says that “by combining advanced AI detection tools, robust authentication protocols, and comprehensive employee training, organizations can build a stronger defense against these sophisticated attacks.”
Digital ID wallets offer another aspect of deepfake fraud prevention
Digital identity wallets are another tool bringing the fight to deepfakes. According to a recent report from the World Economic Forum, “Digital identity wallets address multiple vulnerabilities to fraud, including those associated with AI.”
“They can incorporate advanced technologies such as 1:1 facial verification, duplicate face check and liveness detection, making them far more resistant to deepfake-driven impersonation attempts and scaled attacks,” the WEF says.
It notes that seven states credited ID.me, the largest digital ID wallet in the U.S., with helping to prevent over $270 billion in fraud during the pandemic.
“Digital identity wallets that leverage good biometrics to combat deepfakes and scams offer a secure and scalable way for organizations to verify that the individuals seeking access to their benefits and services online are who they say they are.”
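The sequence of checks the WEF describes can be pictured as a short gatekeeping routine. The sketch below is purely illustrative: the helper callables stand in for a vendor’s biometric SDK rather than any real API, and the match threshold is an arbitrary placeholder.

```python
# Illustrative sketch of the check sequence the WEF describes for a digital ID
# wallet; the helper callables are hypothetical stand-ins for a biometric SDK.
from typing import Callable


def enroll_wallet_holder(
    selfie: bytes,
    document_portrait: bytes,
    is_live: Callable[[bytes], bool],
    match_score: Callable[[bytes, bytes], float],
    seen_before: Callable[[bytes], bool],
    match_threshold: float = 0.90,
) -> bool:
    """Run liveness, 1:1 face verification, and a duplicate-face check in order."""
    # 1. Liveness detection: reject replayed or synthetic imagery up front.
    if not is_live(selfie):
        return False
    # 2. 1:1 facial verification: the selfie must match the ID document portrait.
    if match_score(selfie, document_portrait) < match_threshold:
        return False
    # 3. Duplicate face check: block the same face enrolling under many identities.
    if seen_before(selfie):
        return False
    return True
```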
Article Topics
biometric authentication | biometrics | deepfake detection | deepfakes | fraud prevention | generative AI | KYC | Reality Defender | synthetic data | voice biometrics