Disconnect between deepfake attacks and defenses revealed by iProov survey
Deepfakes are an imminent fraud threat, but most organizations are still not taking them seriously enough, and many are confused about how biometrics can help, according to the latest report from iProov.
“The Good, the Bad, and the Ugly: A Global Study on the threat of AI and deepfakes” compiles the observations of 500 enterprise technology decision-makers.
It shows “the extent to which deepfakes are starting to hit society,” says Ajay Amlani, iProov’s President and head of the Americas, in an interview with Biometric Update. “It’s been something that historically people have said is coming, but now it’s clear that it’s actually here.”
Even cybersecurity firms have hired people based on deepfake videos, he notes.
The stakes are high: the threat of fraud could push society back into requiring in-person identity verification. But the days of booking appointments at a government office building, arranging child care and so on are supposed to be over.
The split is nearly even between organizations that have encountered at least one deepfake (47 percent) and those that have not (50 percent). A large majority (73 percent) are actively implementing cybersecurity solutions to defend against deepfakes, according to the survey.
There is some regional variation: North American businesses are less likely to have encountered a deepfake, while those in Latin America are less likely to expect a significant impact on their organization, whether from deepfakes themselves or from legal and regulatory penalties.
Suddenly here, or suddenly seen?
Generative AI is becoming more familiar to people in the form of beautification features or filters that let users swap their gender or ethnicity in real time. “People are feeling like this is coming on so fast, but TikTok has had its filters capability for the last year,” Amlani says.
What’s more surprising, he says, is the quality the fakes have reached.
One of the results, Amlani explains, is a split between relatively benign uses of the technology, such as anonymization in loyalty programs, and an expansion of fraud vectors.
Amlani has a well-established track record of advocating for biometrics to defend against fraud while improving user experiences, one he traces back some 20 years to being sent by Homeland Security Secretary Tom Ridge to observe how well the US-VISIT program worked.
The survey shows biometrics are valued as a tool for protecting against deepfakes by 75 percent of respondents, more than any other technology. Multifactor authentication was next at 69 percent, followed by device-based biometrics (67 percent) and deepfake detection algorithms, which were selected by only 47 percent.
“The time is now to really focus on deploying proofs of concept to make sure that you have tested solutions on the market, you’ve integrated solutions in the market,” Amlani says. “Because when the day comes that you’re getting hit at scale, you want to have a vendor choice on hand already.”
Businesses that have yet to implement biometrics do not have the same urgent need, Amlani says. But even then, if they are doing video calls, they can protect those calls while testing out technology that will only become more important as time goes on.
The cost of deploying a system for testing, often in the $20,000 to $50,000 range, is minimal compared to the cost of fraud, Amlani says.
The professionalization of AI fraud makes it incumbent on organizations to prepare proactively to fend off deepfake attacks.
Deepfake fraud stories have been publicized, and Amlani applauds those organizations that have admitted being victimized. Most attacks are kept quiet, he says, and even disguised in company records, sometimes as bug bounty payments.
Meanwhile, bad actors are reinvesting the proceeds of their attacks to fuel more and stronger fraud.
Amlani acknowledges a disconnect within the survey findings, as 7 in 10 respondents expect deepfakes made with generative AI to have a “moderate” or “major” impact on their organization. Yet more than 4 in 10 “somewhat agree” that their organization is not taking the threat seriously enough, and a further 19 percent “strongly agree.” He chalks this up to an awareness gap, with the reluctance of companies to admit they’ve been hacked a contributing factor, along with the lag in biometrics adoption. Traditional attacks using breached personal information are still sufficient to steal from many organizations.
This is particularly the case, Amlani says, in the United States, where biometrics adoption is still hindered by concerns, with varying degrees of legitimacy, about biometrics regulation, privacy and bias.
Naïveté about native device biometrics
The preference for device-based biometrics over deepfake detection algorithms reflected in the survey is a function, at least in part, of a split between cybersecurity and fraud teams.
“Cameras themselves are actually where the vulnerability is,” Amlani says. “The cameras on devices were never really made to be locked down. They were meant to be emulated,” so that for instance consumers can use third-party webcams.
Many fraud teams are still thinking about attacks and liveness in terms of biometric presentation attacks, which Amlani calls “passé.”
“People do not make, at scale, Mission Impossible-style masks to be able to trick systems today. They’re not spending time and using AI to create a deepfake to print it out on an eight and a half by eleven sheet.”
Digital injection attacks are the method of delivery. That makes this a cybersecurity attack rather than a biometric attack as such, Amlani explains, and many organizations therefore do not fully understand why native device biometrics are inadequate as a defense against deepfakes.
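To make the distinction concrete: an injection attack bypasses the physical camera entirely, feeding a pre-recorded or synthetic stream into the capture pipeline, often through a virtual camera driver. The sketch below is a minimal, hypothetical example of one low-level signal on a Linux host, flagging video devices whose advertised names match common virtual-camera defaults such as v4l2loopback. The marker strings and sysfs path are assumptions about typical setups, not a description of iProov’s detection, and a check like this is trivially evaded on its own.

```python
from pathlib import Path

# Hypothetical sketch: one low-level injection-attack signal on a Linux host.
# Virtual camera drivers (e.g. v4l2loopback, often used to pipe synthetic video
# into the capture pipeline) usually advertise tell-tale device names. Real
# defenses combine many device, network and image-level signals.

SUSPECT_MARKERS = ("loopback", "dummy", "virtual", "obs")

def suspicious_video_devices(sysfs_root: str = "/sys/class/video4linux"):
    """Return (device, name) pairs whose advertised name looks like a virtual camera."""
    suspects = []
    for dev in Path(sysfs_root).glob("video*"):
        name_file = dev / "name"
        if name_file.exists():
            name = name_file.read_text().strip()
            if any(marker in name.lower() for marker in SUSPECT_MARKERS):
                suspects.append((dev.name, name))
    return suspects

if __name__ == "__main__":
    for device, name in suspicious_video_devices():
        print(f"Possible injected stream source: {device} ({name})")
```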
Device manufacturers like Apple and Google do understand the issue, Amlani says. This is why Apple’s enrollment flow for mobile driver’s licenses is different from, and harder to pass than, Face ID. Businesses using Face ID for biometric identity verification are taking the risk on themselves, but Apple is responsible if someone enrolls a fraudulent mDL, so the process includes several active liveness detection steps, with users looking to either side and raising their eyebrows.
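The general pattern behind such active liveness checks is that challenges are chosen at random per session, so a pre-recorded or replayed video cannot anticipate them. The sketch below is purely illustrative of that pattern; the challenge names and stubbed verifier are hypothetical and do not describe Apple’s or any vendor’s actual flow.

```python
import random

# Hypothetical sketch of a randomized active liveness challenge sequence.
# Real systems verify each response with head-pose and expression models on live video;
# here the verifier is a caller-supplied stand-in.

CHALLENGES = ("look_left", "look_right", "raise_eyebrows")

def build_challenge_sequence(length: int = 3):
    """Unpredictable per-session ordering defeats simple replay of a recorded face."""
    rng = random.SystemRandom()
    return [rng.choice(CHALLENGES) for _ in range(length)]

def verify_enrollment(video_frames, challenges, responds_to):
    """responds_to(frames, challenge) -> bool stands in for a real pose/expression model."""
    return all(responds_to(video_frames, challenge) for challenge in challenges)
```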
iProov’s Flashmark passive liveness serves the same purpose, Amlani says, by verifying the presence of live human tissue in front of the camera. The company also applies device signals and looks for signs of manipulation like pixel changes and blurring, “but that can only take you so far.”
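As a rough illustration of two of those image-level cues, the sketch below scores frames for unusual softness, a common artifact around generated or pasted face regions, and for abrupt frame-to-frame pixel changes. The OpenCV-based heuristics and thresholds are assumptions made for illustration, not iProov’s pipeline, and on their own they “can only take you so far.”

```python
import cv2

# Illustrative heuristics only: the blur and pixel-change cues mentioned above.
# Thresholds are guesses and would need tuning on real data.

def blur_score(frame_bgr) -> float:
    """Variance of the Laplacian; unusually low values suggest a soft, possibly synthetic region."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())

def frame_delta(prev_bgr, curr_bgr) -> float:
    """Mean absolute per-pixel change; spikes can indicate spliced or injected frames."""
    return float(cv2.absdiff(prev_bgr, curr_bgr).mean())

def flag_suspect_frames(frames, blur_thresh=50.0, delta_thresh=40.0):
    """Return (index, reason) pairs for frames that trip either heuristic."""
    suspects = []
    for i, frame in enumerate(frames):
        if blur_score(frame) < blur_thresh:
            suspects.append((i, "low_sharpness"))
        if i > 0 and frame_delta(frames[i - 1], frame) > delta_thresh:
            suspects.append((i, "abrupt_pixel_change"))
    return suspects
```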
Amlani also points out that, with awareness of deepfake fraud still rising, the 47 percent who want dedicated detection algorithms can be taken as a positive sign, and one likely to increase in the months ahead.
Perhaps the most worrying finding from the survey is about what biometric modalities are safe from deepfakes.
More than 4 out of 5 professionals say that fingerprint biometrics are effective at combating deepfakes, followed by iris (68 percent), face (67 percent) and advanced behavioral biometrics (65 percent). It may be encouraging that voice came last, at 48 percent, but as Amlani points out, remote fingerprinting is generally carried out with the rear-facing camera of a smartphone, which cannot be used with the flashes of color iProov relies on to detect deepfakes. For this reason, iProov is working on unattended fingerprint capture, “but it takes some time,” Amlani says.
In the meantime, do not use remote fingerprints to defend against deepfakes, despite an apparently widespread misconception.
Article Topics
biometric liveness detection | biometrics | cybersecurity | deepfake detection | generative AI | injection attacks | iProov | passive facial liveness