Deepfakes: Increasing fake news and identity theft in 2020

This is a guest post by Joseph Carson, Chief Security Scientist at Thycotic.
Earlier this year, a video was published on the web featuring some of the world’s most recognizable political leaders — Donald Trump, Theresa May, Justin Trudeau, Xi Jinping, Vladimir Putin and more — singing the famous 1970s hit “Imagine” by John Lennon. The video pans across several scenes of onlookers watching the presidents, prime ministers and other heads of state perform the ballad on TV.
The video was the product of an artificial intelligence technology (which I prefer to call machine intelligence) used to analyze facial features and movements and recreate video imagery — a technique known as deepfakes. These creations, built from manipulated digital footprints, stem from deep learning, an extension of machine learning that uses layered algorithms to digest and learn from multiple data sets. With enough data and digital footprints, deepfakes can convincingly mimic identities, such as a nation’s president. As expected, these fakes can be used for more than entertainment purposes.
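To make the idea concrete, here is a minimal sketch in PyTorch (an assumption; the article names no tooling) of the shared-encoder, per-identity-decoder design popularized by early face-swap deepfake tools. All class names, layer sizes and the 64x64 input resolution are illustrative, not a description of any specific system.

```python
# Minimal sketch of the shared-encoder / per-identity-decoder idea behind
# classic face-swap deepfakes. Layer sizes and names are illustrative only.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses an aligned 64x64 face crop into a latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face for ONE specific identity from the shared latent."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# One encoder learns pose and expression from BOTH identities' footage;
# each decoder learns to render only its own identity.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

# After training, the "swap" is simply: encode a frame of person A,
# then decode it with person B's decoder.
frame_of_a = torch.rand(1, 3, 64, 64)      # stand-in for a real video frame
fake_b = decoder_b(encoder(frame_of_a))    # B's face with A's expression and pose
print(fake_b.shape)                        # torch.Size([1, 3, 64, 64])
```

The more footage of a target that is publicly available, the more material there is to train the target’s decoder on, which is why a growing digital footprint matters in the sections that follow.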
The spread of misinformation
In the past few years, “fake news,” a modern form of propaganda, has become a household term. Although the total effect of fake news is still debated, there is little denying that it influenced the 2016 US presidential election. There is a major concern that applied AI and machine learning can be leveraged to create deepfake videos, with dubbed audio, of politicians stating false information or misleading rhetoric. Next thing you know, a fake video of a candidate making ludicrous comments goes viral on social media, it is shared on major broadcast news channels with anchors and commentators up in arms, and news and political outlets publish story after story announcing and denouncing the false statements — all as a result of a fabricated video.
New identity theft concerns
While the distribution of fake news is a widespread and growing issue, the manipulation of personal audio, video and other digital footprints to create highly detailed fake messages could have a massive effect at the individual level. As people share more and more personal data, increasing their online digital footprints, this will only become a greater issue. In 2020, this falsifying technology will become increasingly problematic for those susceptible to identity theft — meaning almost everyone should be concerned that their digital footprint can be used to create deepfake content.
Many smart devices now use biometric data to identify users. Rather than entering a PIN or password, a user can simply look at an iPhone and the device will unlock. Many assume this is a secure method of authentication because it will only unlock for that specific user’s face. But if facial features and other digital footprints can be manipulated accurately enough to create convincing media, then cybercriminals can take advantage of the same technology for malicious purposes. It is also important to remember that biometrics are not secrets and do not replace passwords; they simply replace your identifier, which is usually your username or email address, while removing the password altogether.
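To illustrate that distinction, here is a minimal, hypothetical sketch of a login flow in which a biometric match only identifies the account (like a username) and a separate secret still has to verify it. The function names, threshold and toy similarity score are invented for this example and do not describe any real product.

```python
# Sketch: a biometric match *identifies* a user; a secret *authenticates* them.
# All names, the threshold and the similarity score are hypothetical.
import hashlib
import hmac

USERS = {
    "alice": {
        "face_template": [0.12, 0.88, 0.45],  # enrolled biometric template
        "password_hash": hashlib.sha256(b"salt" + b"correct-horse").hexdigest(),
    },
}

def face_similarity(template, probe):
    # Placeholder score; a real system uses a trained face matcher.
    return 1.0 - sum(abs(a - b) for a, b in zip(template, probe)) / len(template)

def identify(probe, threshold=0.9):
    """Step 1: biometric lookup -- answers 'who is this probably?'"""
    for name, record in USERS.items():
        if face_similarity(record["face_template"], probe) >= threshold:
            return name
    return None

def authenticate(name, password):
    """Step 2: secret check -- answers 'can they prove it?'"""
    candidate = hashlib.sha256(b"salt" + password.encode()).hexdigest()
    return hmac.compare_digest(candidate, USERS[name]["password_hash"])

user = identify([0.11, 0.87, 0.46])    # a good enough fake could pass this step
if user and authenticate(user, "correct-horse"):
    print(f"{user} logged in")         # the secret, not the face, is what is verified
```

The point of the sketch is that if a device skips step 2 entirely, the biometric is doing the work of a secret it was never designed to be.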
Historically, cybercriminals have wielded a variety of attack vectors to steal user identities, most often through social engineering methods involving phishing. The focus has largely been on email compromise and digital identities (names, addresses, social security numbers, etc.). Deepfakes, however, enable a whole new kind of social engineering in which attackers build convincing facades, maliciously stealing victims’ digital voice and face profiles. This takes identity theft to an entirely new level.
It’s not just visual: authenticators such as fingerprints and voice recognition can also be compromised. As more and more people post pictures, videos and personal details online, they increase the amount of data that cybercriminals can leverage. Everyone knows about fraudulent phone calls (“vishing”), which often use personal information about the target. Many victims report that the callers sounded like family, friends or colleagues, leading them to believe that requests for money or personal information were legitimate and trustworthy. With this new form of digital identity theft, criminals have even more tools at their disposal.
What to do
The best course of action is to be cautious and to distrust anything you see online that lacks context or a clear origin. While the expansion of technology has made a whole new world of information available, unfortunately, not all of that information is true. You should also be hesitant to trust any online request for money or personal details. Be careful what you post to Facebook and other sites, and try to keep as much of your data as possible secure and protected using encryption and password managers.
About the author
Joseph Carson is a cyber security professional with more than 20 years’ experience in enterprise security & infrastructure. Currently, Carson is the Head of Global Strategic Alliances at Thycotic.
DISCLAIMER: BiometricUpdate.com blogs are submitted content. The views expressed in this blog are those of the author, and don’t necessarily reflect the views of BiometricUpdate.com.
Article Topics
artificial intelligence | biometric liveness detection | biometrics | cybersecurity | deepfakes | digital identity | facial recognition | identity verification | spoofing | Thycotic | voice authentication
An effective defensive technology that can substantially guard against deepfakes is AI-driven liveness detection. It can already discern the difference between a video (which is what a deepfake is) and a live, physically present user. Dozens of vendors have deployed certified (third-party-tested) liveness detection over the past one to two years with exceptional results. While these deployments do not yet represent a large percentage of use cases, the approach is quickly gaining favor. There will always be a need to manage our personal information as suggested here, but this is a fight that will be won with advanced technologies.
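As a rough illustration of why a pre-made video struggles against liveness checks, here is a toy sketch of a challenge-response flow: the challenge is chosen at verification time, so a replayed or synthesized clip cannot anticipate it. Real certified liveness detection (often passive and AI-driven, as described above) is far more sophisticated; every name and check here is hypothetical.

```python
# Toy sketch of the challenge-response idea behind some liveness checks.
# A pre-recorded or synthesized video cannot respond to a challenge chosen
# only at request time. Real certified systems are far more sophisticated.
import secrets

CHALLENGES = ["blink twice", "turn head left", "smile", "move phone closer"]

def issue_challenge():
    """Pick an unpredictable challenge at verification time."""
    return secrets.choice(CHALLENGES)

def detected_action(video_frames):
    # Placeholder: a real system runs face and landmark analysis on the frames.
    return video_frames.get("observed_action")

def is_live(video_frames, challenge):
    """A replayed deepfake video cannot know the challenge in advance."""
    return detected_action(video_frames) == challenge

challenge = issue_challenge()
session = {"observed_action": "blink twice"}  # stand-in for analyzed camera input
print("live user" if is_live(session, challenge) else "possible replay or deepfake")
```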