Deepfake risk can be mitigated but no silver bullet exists: Veriff
Veriff’s new Deepfakes Deep Dive Report is the latest to ring the alarm bell on the deepfake threat and the need to “cultivate a fraud-prevention ecosystem.”
A release from the identity verification provider says the report focuses on how deepfakes stand to impact the digital economy and on strategies for mitigating the risks.
“Deepfakes aren’t necessarily a new phenomenon. We’ve dealt with the threat for years,” says Rishi Chauhan, director of product for identity and fraud at Veriff. “However, the GenAI-driven ability to impersonate a user’s likeness creates serious security problems for online businesses.”
The problem is not that deepfakes are new, but that new tools and techniques are continually making them better, cheaper and easier to create. Or, as one of the report’s key takeaways says, “the AI threat is becoming more sophisticated.”
In a summary of the report published on Veriff’s blog, Director of Content Chris Hooper explains that deep learning organizes algorithms into artificial neural networks that “mimic the structure of the human brain” and can be trained to make “complex, non-linear correlations between large amounts of diverse and unstructured data.”
“As a result, deep learning models can learn independently from their environment and from past mistakes to perform tasks to an extremely high standard – including creating fake audio and video,” says Hooper. He notes estimates from DeepMedia that some 500,000 video and voice deepfakes were shared on social media sites globally in 2023.
Disorganized ID management, weak defenses open door to fraudsters
Enterprises with disjointed, inconsistent identity management processes and weak cybersecurity are more likely to be targeted by fraudsters looking for an easy score. Synthetic identities and forged documents can be used to subvert biometric checks during Know Your Customer (KYC) onboarding, allowing fraudsters to open fake accounts. Existing accounts are also vulnerable to deepfakes and other digital social engineering techniques.
Veriff’s report identifies four common fraud techniques that attackers have developed with the help of AI and deep learning: face swaps, lip sync, puppets, and GANs and autoencoders. Face swaps, which superimpose one person’s face onto another’s image or video, are increasingly common, particularly in pornographic content; Chauhan says the technology doesn’t “just leap forward every year, but every few months.” Lip-sync attacks use algorithms to alter a subject’s mouth movements to match fabricated audio, while puppet attacks animate a target’s face or body from a source actor’s movements. GANs (generative adversarial networks) and autoencoders, meanwhile, can be used to generate entirely synthetic identities from large training data sets.
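To make the autoencoder idea concrete: an autoencoder compresses an input into a small latent code and reconstructs it, and face-swap pipelines exploit this by pairing a shared encoder with a different identity’s decoder. The sketch below is purely illustrative (a tiny linear autoencoder on random data, not Veriff’s tooling or a real face model); it only shows that reconstruction error falls as the encoder/decoder weights are trained.

```python
import numpy as np

# Minimal linear autoencoder sketch (illustrative only, not Veriff's tooling).
# It compresses 16-dimensional inputs into a 4-dimensional latent code and
# reconstructs them; training reduces the reconstruction error.

rng = np.random.default_rng(0)
n_features, n_latent, n_samples = 16, 4, 200
X = rng.normal(size=(n_samples, n_features))

# Small random encoder/decoder weights (no biases or nonlinearity, for brevity).
W_enc = rng.normal(scale=0.1, size=(n_features, n_latent))
W_dec = rng.normal(scale=0.1, size=(n_latent, n_features))

def mse(X, W_enc, W_dec):
    recon = X @ W_enc @ W_dec
    return float(np.mean((recon - X) ** 2))

initial_error = mse(X, W_enc, W_dec)

lr = 0.01
for _ in range(500):
    Z = X @ W_enc                      # encode into the latent space
    recon = Z @ W_dec                  # decode back to input space
    g = 2.0 * (recon - X) / (n_samples * n_features)
    grad_dec = Z.T @ g                 # gradient w.r.t. decoder weights
    grad_enc = X.T @ (g @ W_dec.T)     # gradient w.r.t. encoder weights
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

final_error = mse(X, W_enc, W_dec)
print(initial_error, final_error)
```

In a real face-swap pipeline the encoder and decoders are deep convolutional networks trained on face images, but the swap trick is the same: encode person A, decode with person B’s decoder.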
“To stay ahead of fraudsters, companies need a constantly evolving, multi-layered approach that combines a range of threat mitigation tools. Unfortunately, no single solution exists,” says Chauhan.
Layered approach still safest bet for overall cybersecurity
The strongest defense against deepfakes is likely to be a stack of security elements. Chauhan says a well-coordinated strategy could include comprehensive checks on identity documents, examination of key device attributes, treating data absence as a risk factor, using AI to identify falsified images, pattern detection, and biometric analysis of photos and video. “Whether you use IDV, document verification, or other tools, the more controls and data points you have, the harder it is for the fraudsters,” Chauhan says.
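One way to picture how such layered signals could be combined is a simple additive risk score. The signal names, weights, and penalties below are hypothetical assumptions for illustration only, not Veriff’s scoring model; the one idea taken directly from the article is that missing data is itself treated as a risk factor.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative multi-signal risk scorer. All weights and signal names are
# hypothetical, not Veriff's model. Absent data adds risk rather than
# being ignored, per the article's "data absence as a risk factor" advice.

@dataclass
class VerificationSignals:
    document_check_passed: Optional[bool]  # ID document authenticity check
    device_trusted: Optional[bool]         # key device attributes
    image_forgery_score: Optional[float]   # 0.0 (clean) .. 1.0 (likely fake)
    liveness_score: Optional[float]        # 0.0 (spoof) .. 1.0 (live person)

MISSING_PENALTY = 0.15  # assumed penalty when a signal is unavailable

def risk_score(s: VerificationSignals) -> float:
    """Return a 0..1 risk score; higher means more likely fraudulent."""
    risk = 0.0

    if s.document_check_passed is None:
        risk += MISSING_PENALTY
    elif not s.document_check_passed:
        risk += 0.35

    if s.device_trusted is None:
        risk += MISSING_PENALTY
    elif not s.device_trusted:
        risk += 0.20

    if s.image_forgery_score is None:
        risk += MISSING_PENALTY
    else:
        risk += 0.25 * s.image_forgery_score

    if s.liveness_score is None:
        risk += MISSING_PENALTY
    else:
        risk += 0.20 * (1.0 - s.liveness_score)

    return min(risk, 1.0)

clean = VerificationSignals(True, True, 0.05, 0.95)
suspect = VerificationSignals(False, None, 0.9, 0.2)
print(risk_score(clean), risk_score(suspect))
```

The point of the sketch is the shape of the defense, not the numbers: each independent control contributes evidence, so a fraudster must defeat every layer at once rather than any single check.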
Article Topics
biometric liveness detection | biometrics | deepfake detection | deepfakes | face swap | fraud prevention | generative AI | Veriff