Half of global businesses face deepfake attacks, Regula reports
Reported audio and video deepfake fraud has struck 49 percent of businesses in the past 12 months. According to “The Deepfake Trends 2024” survey commissioned by Regula, video deepfakes increased by 20 percent and audio deepfakes by 12 percent compared with the previous report. Comments on audio deepfakes from a Pindrop executive only add to the concern.
The survey indicates that audio deepfakes are relatively common in sectors such as financial services (51 percent) and crypto (55 percent), while video deepfakes are reported more often in law enforcement (56 percent) and FinTech (57 percent).
The report also highlights regional disparities, with the UAE and Singapore proving more susceptible: in both markets, 56 percent of businesses reported AI-generated deepfake fraud. In contrast, deepfakes had the lowest impact on businesses in Mexico.
While audio and video deepfakes are a concern, traditional document forgery and manipulation are more prevalent than AI-generated scams. About 58 percent of businesses have encountered fraudulent activities involving modified documents, making it the most common form of identity fraud, the report says.
“The surge in deepfake incidents over the two-year period of our survey leaves businesses no choice but to adapt and rethink their current verification practices,” says Ihar Kliashchou, chief technology officer at Regula. “Deepfakes are becoming increasingly sophisticated, and traditional methods are no longer enough. What we think may work well is the liveness-centric approach, a robust procedure that involves checking the physical characteristics of both individuals and their documents; in other words, verifying biometrics and ID hardcopies in real-time interactions.”
The surge in the number of incidents Kliashchou refers to coincides with an increase in the quality of deepfakes that has reached a grim milestone, at least on the audio side.
Synthetic audio has crossed the uncanny valley, says Pindrop CPO
The chief product officer of Pindrop, Rahul Sood, said in a webinar that synthetic audio has crossed the “uncanny valley” to the point where it is indistinguishable from real, trustworthy voices.
In a recent news report, Senator Ben Cardin, chair of the United States Senate Foreign Relations Committee, fell victim to a deepfake attack in which a top Ukrainian official was impersonated. The incident highlights how difficult it has become to distinguish real from manipulated content in the digital landscape.
Developing audio/video deepfakes has become increasingly accessible with the availability of thousands of open-source models. According to Sarosh Shahbuddin, senior director for Product at Pindrop, the most convincing deepfakes, which are difficult to detect, can be generated using 10 minutes of speech.
Detecting deepfake attacks is further complicated by the introduction of background noise into the media, says Dr. Oren Etzioni, founder of TrueMedia.org, a free tool for detecting deepfakes on social media.
As the attack methods continue to evolve, the detection models must be regularly updated and enhanced with larger datasets and advanced machine learning algorithms, he continues.
Earlier this year, Pindrop announced a preview of its Pulse Inspect biometric deepfake detection tool, which the company claims detects AI-generated speech in digital audio files with 99 percent accuracy.
According to Pindrop, Pulse Inspect analyzed 21 million phone calls for liveness and found that 0.3 percent were non-live. Reported trends indicate that reconnaissance, account takeover, and fraudulent transactions are the three types of deepfake attacks Pulse Inspect has detected in the calls.