
Half of global businesses face deepfake attacks, Regula reports

Audio attacks improve in quality

Reported audio and video deepfake fraud has struck 49 percent of businesses in the past 12 months. According to "The Deepfake Trends 2024" survey commissioned by Regula, video deepfake incidents increased by 20 percent and audio deepfakes by 12 percent compared with the previous report. Comments on audio deepfakes from a Pindrop executive only add to the concern.

The survey indicates that audio deepfakes are relatively common in sectors such as financial services (51 percent) and crypto (55 percent), while video deepfakes are more reported in law enforcement (56 percent) and FinTech (57 percent).

The report also highlights regional disparities, with the UAE and Singapore proving more susceptible: in both markets, 56 percent of businesses reported AI-generated deepfake fraud. In contrast, deepfakes had the lowest impact on businesses in Mexico.

While audio and video deepfakes are a concern, traditional document forgery and manipulation are more prevalent than AI-generated scams. About 58 percent of businesses have encountered fraudulent activities involving modified documents, making it the most common form of identity fraud, the report says.

“The surge in deepfake incidents over the two-year period of our survey leaves businesses no choice but to adapt and rethink their current verification practices,” says Ihar Kliashchou, chief technology officer at Regula. “Deepfakes are becoming increasingly sophisticated, and traditional methods are no longer enough. What we think may work well is the liveness-centric approach, a robust procedure that involves checking the physical characteristics of both individuals and their documents; in other words, verifying biometrics and ID hardcopies in real-time interactions.”

The surge in the number of incidents Kliashchou refers to coincides with an increase in the quality of deepfakes that has reached a grim milestone, at least on the audio side.

Synthetic audio has crossed the uncanny valley, says Pindrop CPO

Pindrop's chief product officer, Rahul Sood, said in a webinar that synthetic audio has crossed the "uncanny valley" to the point where it is indistinguishable from real, trustworthy voices.

In a recent news report, Senator Ben Cardin, chair of the United States Senate Foreign Relations Committee, fell victim to a deepfake attack in which an impostor posed as a top Ukrainian official. The incident highlights the challenge of distinguishing between real and manipulated content in the digital landscape.

Developing audio/video deepfakes has become increasingly accessible with the availability of thousands of open-source models. According to Sarosh Shahbuddin, senior director for Product at Pindrop, the most convincing deepfakes, which are difficult to detect, can be generated using 10 minutes of speech.

The detection of deepfake attacks is further complicated by the introduction of background noise into the media, says Dr. Oren Etzioni, founder of TrueMedia.org, a free tool for detecting deepfakes on social media.

As the attack methods continue to evolve, the detection models must be regularly updated and enhanced with larger datasets and advanced machine learning algorithms, he continues.

Earlier this year, Pindrop announced the preview of its Pulse Inspect biometric deepfake detection tool, which Pindrop claims detects AI-generated speech in digital audio files with 99 percent accuracy.

According to Pindrop, Pulse Inspect analyzed 21 million phone calls for liveness and found that 0.3 percent were non-live. Reported trends indicate that reconnaissance, account takeover, and fraudulent transactions are the three types of deepfake attacks that Pulse Inspect has detected in those calls.
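A quick back-of-envelope check puts those percentages in absolute terms. This sketch uses only the figures quoted above; the helper function is ours for illustration and is not part of any Pindrop API.

```python
def non_live_calls(total_calls: int, non_live_rate: float) -> int:
    """Approximate count of calls flagged as non-live, given a flag rate."""
    return round(total_calls * non_live_rate)

# Figures from the article: 21 million calls analyzed, 0.3 percent non-live.
flagged = non_live_calls(21_000_000, 0.003)
print(flagged)  # 63000
```

In other words, a seemingly small 0.3 percent rate still corresponds to tens of thousands of suspect calls at that volume.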
