
Half of global businesses face deepfake attacks, Regula reports

Audio attacks improve in quality

Reported audio and video deepfake fraud has struck 49 percent of businesses in the past 12 months. According to “The Deepfake Trends 2024” survey commissioned by Regula, video deepfakes increased by 20 percent and audio deepfakes by 12 percent compared to the previous report. Comments on audio deepfakes from a Pindrop executive only add to the concern.

The survey indicates that audio deepfakes are relatively common in sectors such as financial services (51 percent) and crypto (55 percent), while video deepfakes are more frequently reported in law enforcement (56 percent) and FinTech (57 percent).

The report also highlights regional disparities, with the UAE and Singapore showing higher susceptibility: in both, 56 percent of businesses reported AI-generated deepfake fraud. In contrast, deepfakes had the lowest impact on businesses in Mexico.

While audio and video deepfakes are a concern, traditional document forgery and manipulation are more prevalent than AI-generated scams. About 58 percent of businesses have encountered fraudulent activities involving modified documents, making it the most common form of identity fraud, the report says.

“The surge in deepfake incidents over the two-year period of our survey leaves businesses no choice but to adapt and rethink their current verification practices,” says Ihar Kliashchou, chief technology officer at Regula. “Deepfakes are becoming increasingly sophisticated, and traditional methods are no longer enough. What we think may work well is the liveness-centric approach, a robust procedure that involves checking the physical characteristics of both individuals and their documents; in other words, verifying biometrics and ID hardcopies in real-time interactions.”

The surge in the number of incidents Kliashchou refers to coincides with an increase in the quality of deepfakes that has reached a grim milestone, at least on the audio side.

Synthetic audio has crossed the uncanny valley, says Pindrop CPO

Pindrop chief product officer Rahul Sood said in a webinar that synthetic audio has crossed the “uncanny valley,” to the point where it is indistinguishable from real, trustworthy voices.

In a recent news report, Senator Ben Cardin, chair of the U.S. Senate Foreign Relations Committee, fell victim to a deepfake attack in which an impersonator posed as a top Ukrainian official. The incident highlights the challenge of distinguishing between real and manipulated content in the digital landscape.

Developing audio and video deepfakes has become increasingly accessible, with thousands of open-source models now available. According to Sarosh Shahbuddin, senior director of product at Pindrop, the most convincing deepfakes, which are the hardest to detect, can be generated from 10 minutes of speech.

Detecting deepfake attacks is further complicated when background noise is introduced into the media, says Dr. Oren Etzioni, founder of TrueMedia.org, a free tool for detecting deepfakes on social media.

As attack methods continue to evolve, detection models must be regularly updated and enhanced with larger datasets and advanced machine learning algorithms, he continues.

Earlier this year, Pindrop announced a preview of its Pulse Inspect biometric deepfake detection tool, which it claims detects AI-generated speech in digital audio files with 99 percent accuracy.

According to Pindrop, Pulse Inspect analyzed 21 million phone calls for liveness and found that 0.3 percent were non-live. Reported trends indicate that reconnaissance, account takeover, and fraudulent transactions are the three types of deepfake attacks Pulse Inspect has detected in those calls.
