Harmful application of deepfakes ‘growing rapidly online,’ new report warns
As California Gov. Gavin Newsom signed legislation banning the distribution of deepfake videos and pictures of political candidates within 60 days of an election, Deeptrace, an Amsterdam-based cybersecurity firm that provides deep learning and computer vision technologies for detecting and monitoring synthetic media online, was preparing to issue a report on the problem.
According to Deeptrace, its “research revealed that the deepfake phenomenon is growing rapidly online, with the number of deepfake videos almost doubling over the last seven months to 14,678.” The company said the “increase is supported by the growing commodification of tools and services that lower the barrier for non-experts to create deepfakes,” and that, “perhaps unsurprisingly, we observed a significant contribution to the creation and use of synthetic media tools from web users in China and South Korea, despite the totality of our sources coming from the English-speaking Internet.”
Deeptrace Founder, CEO, and Chief Scientist Giorgio Patrini said in a statement that, “Deepfakes are here to stay, and their impact is already being felt on a global scale. We hope this report stimulates further discussion on the topic, and emphasizes the importance of developing a range of countermeasures to protect individuals and organizations from the harmful applications of deepfakes.”
The company’s report, The State of Deepfakes: Landscape, Threats, and Impact, noted that the number of academic papers mentioning Generative Adversarial Networks (GANs) in their title or abstract published each year on arXiv, an online archive of scientific and mathematical research papers maintained by Cornell University, is “an indirect indication of how fast GAN quality is improving and the rate that new capabilities for real-world applications are being developed … it also indicates how this research is being incorporated into technology accessible to a wider audience outside academia.”
Indeed. The number of such papers jumped from just three in 2014 to more than 1,200 this year.
“Based on the rate of AI progress, we can expect deepfakes to become better, cheaper, and easier to make over a relatively short period of time. Governments should invest in developing technology assessment and measurement capabilities to help them keep pace with broader AI development, and to help them better prepare for the impacts of technologies like this,” Jack Clark, Policy Director of OpenAI, was quoted as saying in the report.
Deeptrace’s study found that the total number of deepfake videos online “is rapidly increasing, with this measurement representing an almost 100 percent increase based on our previous measurement (7,964) taken in December 2018.”
Disturbingly, the report stated, “Deepfakes are also making a significant impact on the political sphere,” pointing to “two landmark cases from Gabon and Malaysia that received minimal Western media coverage” in which deepfakes were “linked to an alleged government cover-up and a political smear campaign. One of these cases was related to an attempted military coup, while the other continues to threaten a high profile politician with imprisonment.”
“Seen together, these examples are possibly the most powerful indications of how deepfakes are already destabilizing political processes,” the report cautioned, stressing that “without defensive countermeasures, the integrity of democracies around the world [is] at risk.”
This threat isn’t lost on US politicians. The legislation just signed by California’s governor was written expressly in response to the manipulated “shallowfake” video of Democratic Speaker of the House Nancy Pelosi posted on Twitter late last May, in which her speech had been slowed down to make it appear she’d drunkenly slurred her words. Not surprisingly, the video went viral, even being retweeted by the official Twitter account of President Trump and by his personal attorney. By July 31, the tweet had received more than 6.3 million views, and on a popular Facebook page the video was viewed more than 2.2 million times within 48 hours of being uploaded.
But “outside of politics, the weaponization of deepfakes and synthetic media is [also] influencing the cybersecurity landscape, enhancing traditional cyber threats and enabling entirely new attack vectors,” the Deeptrace report stated. “Notably, 2019 saw reports of cases where synthetic voice audio and images of non-existent, synthetic people were used to enhance social engineering against businesses and governments.”
Deeptrace pointed out what US intelligence officials have previously told Biometric Update: bogus digital identities can be used to engage in fraud, infiltration, and even espionage.
Deeptrace said it “observed two cases where realistic synthetic photos of non-existent people were used on fake social media profiles in an attempt to deceive other users and extract information,” and that there “have been several reported cases where synthetic voice audio has allegedly been used to defraud companies. While no concrete evidence has been provided to support claims that the audio was synthetic, the cases illustrate how synthetic voice cloning could be used to enhance existing fraud practices against businesses and individuals.”
Perhaps the most glaring example came in September, when it was reported that the CEO of an unnamed UK-based energy company was conned into transferring US$243,000 to the bank account of a Hungarian supplier by a voice on his phone that he believed belonged to the CEO of the firm’s German parent company. Reportedly, the fake voice on the phone had the German CEO’s accent and “melody.”
“Deepfake creation communities and forums are a key driving force behind the increasing accessibility of deepfakes and deepfake creation software,” the Deeptrace report warned, noting that “many of these creation communities and forums provide an entry point for people interested in creating deepfakes, and facilitate collaboration between more experienced creators.”
The study identified 20 deepfake creation community websites and forums; among those that disclosed membership numbers, it counted more than 97,000 members, not all of them unique.
“Deepfakes pose a range of threats, many of which are no longer theoretical,” the report said, concluding that:
• Deepfake creation technologies and tools are being commodified through a growing number of communities, computer apps, and services;
• The online presence of deepfake videos is rapidly expanding, with the vast majority of these videos involving pornographic content;
• Deepfake pornography is a global phenomenon supported by a significant viewership across several dedicated websites, with women exclusively being targeted;
• Awareness of deepfakes alone is destabilizing political processes by undermining the perceived objectivity of videos featuring politicians and public figures; and
• Deepfakes are providing cybercriminals with new sophisticated capabilities to enhance social engineering and fraud.
“These conclusions are drawn from our analysis of the deepfake landscape as it currently lies,” the report stated, emphasizing, however, that “the speed of the developments surrounding deepfakes means this landscape is constantly shifting, with rapidly materializing threats resulting in increased scale and impact. It is essential that we are prepared to face these new challenges. Now is the time to act.”