
Pindrop report finds 90 voice fraud attacks occur every minute

Comes as deepfakes released of British PM, opposition leader endorsing one another

Coinciding with the release Tuesday morning of deepfakes of UK Prime Minister Boris Johnson and opposition leader Jeremy Corbyn endorsing each other ahead of the upcoming election, Pindrop, a company that specializes in voice security and authentication, reported that voice fraud – the newest form of deepfake technology – is a growing “major threat, with rates climbing more than 350 percent from 2014 to 2018.”

The company’s annual 2019 Voice Intelligence Report disclosed having “uncover[ed] skyrocketing fraud rates, with 90 voice channel attacks occurring every minute in the U.S.”

The report details what it said are “emerging fraud threats, the birth of the conversational economy, and why voice authenticated customer experience is the next revenue battleground for enterprises.”

The AI-generated videos of Johnson and Corbyn were created by Future Advocacy to warn that “deepfakes risk fueling misinformation, eroding trust, and compromising democracy,” and to “help to understand the ethical challenges of this kind of technology and the political responses that are needed” to address the threats they pose.

It marks the first time that deepfakes of political candidates have been injected into an ongoing election in the UK. Biometric Update earlier reported that U.S. authorities have become increasingly worried that such technologies will be deployed during the 2020 U.S. elections.

“Cybersecurity crimes are increasing each and every day as fraudsters and the technologies they use become more sophisticated,” said Pindrop CEO and co-founder Vijay Balasubramaniyan, who warned that, “(a)s we examine the biggest threats and trends impacting the insurance, financial, and retail sectors and prepare to battle emerging technologies, we urge enterprises to assess their fraud and authentication strategies to ensure they are prepared to safeguard their customers’ assets.”

The report disclosed that:

• In 2018, the fraud rate reached 1 in 685 calls, its highest point in five years;
• Insurance voice fraud has increased by 248 percent as fraudsters chase policies that exceed $500,000;
• In 2018, 446 million records were exposed from more than 1,200 data breaches; and
• The industries facing the highest fraud risks include insurance (1 fraudulent call in every 7,500), retail (1 in 325), banking (1 in 755), card issuers (1 in 740), brokerages (1 in 1,742), and credit unions (1 in 1,339).

Pindrop emphasized “how synthetic voice attacks will soon become the next form of data breaches.”

The firm said it expects that “in the near future, we will see fraudsters call into contact centers utilizing synthetic voices to test companies on whether or not they have the technology in place to detect them, particularly targeting the banking sector.”

Pindrop cautioned that “these attacks are dependent on deep learning and Generative Adversarial Networks (GANs), a deep neural net architecture comprised of two neural nets, pitting one against the other. GANs can learn to mimic any distribution of data—augmenting images with animation, or video with sound. These technologies use machine learning to generate audio from scratch, analyzing waveforms from a database of human speech and re-creating them at a rate of 24,000 samples per second. The end result includes voices with subtleties such as lip smacks and accents, making it easier for bad actors to commit breaches.”
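The report stops at that high-level description, but the generator-versus-discriminator structure it refers to is easy to illustrate. The following is a minimal, hypothetical sketch in PyTorch, not Pindrop’s system or any production voice-cloning model: a generator maps random noise to a raw waveform at the 24,000 samples-per-second rate the report cites, while a discriminator is trained to distinguish generated clips from real speech. All names, layer sizes, and the placeholder “real speech” batch are illustrative assumptions.

```python
# Minimal GAN sketch for raw audio (illustrative only, not Pindrop's system).
# Two networks are pitted against each other: a generator that produces fake
# waveforms and a discriminator that tries to tell them apart from real speech.
import torch
import torch.nn as nn

SAMPLE_RATE = 24_000                  # samples per second, as cited in the report
CLIP_LEN = SAMPLE_RATE * 1           # one second of audio per clip

class Generator(nn.Module):
    """Maps a latent noise vector to a fake waveform with values in [-1, 1]."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.ReLU(),
            nn.Linear(1024, CLIP_LEN), nn.Tanh(),
        )
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Scores a waveform as real (close to 1) or generated (close to 0)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(CLIP_LEN, 1024), nn.LeakyReLU(0.2),
            nn.Linear(1024, 1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

gen, disc = Generator(), Discriminator()
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
bce = nn.BCELoss()

# One adversarial training step on a batch of clips.
real = torch.rand(8, CLIP_LEN) * 2 - 1    # stand-in for real speech waveforms
fake = gen(torch.randn(8, 128))

# The discriminator learns to separate real clips from generated ones...
loss_d = bce(disc(real), torch.ones(8, 1)) + bce(disc(fake.detach()), torch.zeros(8, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# ...while the generator learns to produce clips the discriminator accepts as real.
loss_g = bce(disc(fake), torch.ones(8, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

Real voice-synthesis systems replace these toy fully connected layers with much deeper convolutional or autoregressive models trained on large speech corpora, which is what makes the subtleties the report mentions, such as lip smacks and accents, possible.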

Disturbingly, Pindrop stated, “(f)raud casualties will continue to rise as bad actors make fewer attempts but exploit companies for bigger losses through more sophisticated and targeted tactics.”

Pindrop analyzed more than 1 billion phone calls for some of the largest call centers in the US, including eight of the top 10 banks, five of the seven leading insurers, and three of the top five financial services companies.

And “through these findings, we have derived the data presented in this report,” the company said, adding, “for the purposes of this study, we are only analyzing the fraud identified in contact centers and all callers have been anonymized.”

Pindrop had previously revealed that voice fraud had risen more than 300 percent from 2013 through 2017, “with no signs of slowing down,” and that “between 2016 and 2017, overall voice channel fraud increased by 47 percent, or one in every 638 calls.”

The firm said at that time that “the year-over-year increase can be attributed to several causes, including the development of new voice technology, the steady uptick in significant data breaches, and acts of fraud across multiple channels.”

Balasubramaniyan said at the time that, “(t)he opportunity for voice to serve as a primary interface is becoming a reality due to integrations with IoT devices, the takeoff of voice assistants and more.”

And, “in turn,” he stated, “advanced voice technology is falling into the hands of bad actors and we’re seeing a dramatic spike in voice fraud.”

Pindrop said “the average fraudster’s toolbox is more advanced than ever, thanks to developments in machine learning and AI technology.” It said it found that these “fraudsters are increasingly leveraging techniques like imitation, replay attack, voice modification software and voice synthesis, often with great success,” and that “the increase year over year was most dramatic in the insurance industry, with a 36 percent increase, followed by banking, with a 20 percent increase.”

Today, Pindrop’s latest findings reveal the problem is growing exponentially, posing serious threats to the global economy and national security.

“Fraudsters … see economic opportunity within the conversational economy,” the company reported, noting that, “(w)ith $14 billion lost annually to fraud, companies will need to overcome challenges with voice such as securing it, allaying privacy concerns, and equipping call centers and customer service representatives with the right solutions to detect and prevent fraud.”

Future Advocacy said the release of the “deepfakes depicting Boris Johnson and Jeremy Corbyn endorsing each other for Prime Minister” was “a stunt to raise awareness on the dangers surrounding online disinformation,” and a clarion call for “all political parties to work together to tackle the threats posed by deepfakes and other online disinformation tactics.”

Future Advocacy’s Areeq Chowdhury said in a statement that “deepfakes represent a genuine threat to democracy and society more widely. They can be used to fuel misinformation and totally undermine trust in audiovisual content.”

Furthermore, he said: “Despite warnings over the past few years, politicians have so far collectively failed to address the issue of disinformation online. Instead, the response has been to defer to tech companies to do more. The responsibility for protecting our democracy lies in the corridors of Westminster, not the boardrooms of Silicon Valley.”

“By releasing these deepfakes,” he explained, “we aim to use shock and humor to inform the public and put pressure on our lawmakers. This issue should be put above party politics. We urge all politicians to work together to update our laws and protect society from the threat of deepfakes, fake news, and micro-targeted political adverts online.”

The group said it has identified four key challenges:

• Detecting deepfakes – whether society can create the means for detecting a deepfake directly at the point of upload or once it has become widely disseminated;
• Liar’s dividend – a phenomenon in which genuine footage of controversial content can be dismissed by the subject as a deepfake, despite it being true;
• Regulation – what should the limitations be with regards to the creation of deepfakes and can these be practically enforced; and
• Damage limitation – managing the impacts of deepfakes when regulation fails and the question of where responsibility should lie for damage limitation.

Bill Posters, a UK artist known for “creating subversive deepfakes of famous celebrities” such as Mark Zuckerberg, Kim Kardashian, and Donald Trump, said, “We’ve used the biometric data of famous UK politicians to raise awareness to the fact that without greater controls and protections concerning personal data and powerful new technologies, misinformation poses a direct risk to everyone’s human rights including the rights of those in positions of power.”

Posters said, “It’s staggering that after three years, the recommendations of the” Digital, Culture, Media, and Sport Committee, one of the Select Committees of the British House of Commons, “enquiry into fake news or the Information Commissioner’s Office enquiry into the Cambridge Analytica scandals have not been applied to change UK laws to protect our liberty and democracy. As a result, the conditions for computational forms of propaganda and misinformation campaigns to be amplified by social media platforms are still in effect today. We urge all political parties to come together and pass measures which safeguard future elections.”
