Deepfake fears motivate consumers to change behavior, but misconceptions linger

Consumer fears about deepfakes, data breaches and account takeovers are pushing security higher among priorities when selecting services and performing identity verification, according to a new report from Jumio.
The 2024 Online Identity Study is composed of responses from more than 8,000 consumers from the U.S., UK, Singapore and Mexico.
The increase in the number of people creating and encountering deepfakes was the most surprising finding at first, Jumio CTO Stuart Wells tells Biometric Update in an interview. But, he reflects, having found more than 100 easily accessible, sophisticated deepfake-creation tools online, perhaps it should not have been. “It took me all of ten minutes to get up and running and create a deepfake,” Wells says.
The number of respondents who believe they could spot a deepfake with their naked eyes increased over the past year, from 52 percent to 60 percent. This is despite nearly three-quarters of consumers (72 percent) saying they worry daily about being fooled by deepfakes.
Wells warns that with “exponential” improvement, “even the most highly trained people have difficulty” identifying deepfakes. Magnification will not help, he says. Tell-tale signs that used to be common, such as synchronization issues between voice and face movement, have improved dramatically.
“I think the rate and pace of quality deepfakes have made it extremely difficult, whether you’re a business or an individual,” Wells says. He points to Microsoft’s decision not to release its single-image generative AI product as an example of a company realizing the risk of these tools.
The widely held misconception that generative AI’s output can be discerned with the naked eye may stem from people’s memories of the first deepfake they ever saw, Wells speculates. Meanwhile, worries about being fooled are fueled by news stories of successful fraud attacks, which do not necessarily show how sophisticated current deepfakes look.
The threat of deepfakes is often bound up with the ability of criminals to create content with an emotional impact, Wells notes. An employee receiving high-pressure instructions from a boss may not take time to check on the genuineness of the video call delivering that instruction.
Presentation attacks on automated remote identity proofing systems are still common, Wells says, but the same tools that can detect those attacks are not effective for defending against deepfakes.
The survey also shows that 19 percent of consumers consider creating a secure password to be the most accurate way to verify identity. Slightly more, 21 percent, favor a biometric comparison of an ID document and a live selfie, while the remaining majority fall somewhere between those two positions.
Wells says persistent education is the only way to change people’s perspectives and behaviors for the better. Reducing friction could also help: many multi-factor authentication tools, for example, are difficult to adopt even for a professional like Wells.
More than 7 in 10 respondents in the latest survey say they are willing to spend a few extra seconds on identity verification, so the potential for change is there.
“If vendors think this through, they can provide ease of use, ease of registration, coupled with security without burdening the end-user with the complexity of all the myriad different types of options,” Wells says. “I think there’s a part to play for education of users in terms of what’s needed, because the study shows that, but by the same token, there’s a lot more work that the vendor could do to bridge that gap and make the adoption time frame much shorter by taking away the complexities and promoting the value of the solution.”
Fighting back with technology, regulation
Wells emphasizes the importance of multi-modal biometrics and multi-modal liveness as a way of increasing the complexity of fraud detection systems.
Layering multiple modalities, presentation attack detection (PAD) technology and device intelligence allows businesses to use “as much real-time information as possible” without increasing the friction experienced by the end user, Wells argues. The movement of the phone as a person takes a selfie yields significant information in the form of a “correlated signal” that is much more difficult to spoof, in combination with the selfie, than just the selfie.
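The layered approach Wells describes can be illustrated with a minimal sketch. Everything here is a hypothetical simplification: the signal names, weights, and threshold are illustrative assumptions, not Jumio’s actual scoring system. The point is only that fusing independent signals (face match, PAD, motion correlation, device intelligence) makes a single spoofed input insufficient to pass.

```python
# Hypothetical sketch of layered risk-signal fusion.
# Signal names, weights, and the decision threshold are illustrative
# assumptions, not any vendor's real implementation.

def fuse_risk_signals(face_match: float, pad_score: float,
                      motion_correlation: float, device_trust: float) -> float:
    """Combine independent verification signals into one confidence score.

    Each input is a probability-like score in [0.0, 1.0]:
      face_match         - biometric similarity between ID photo and selfie
      pad_score          - presentation attack detection (1.0 = likely live)
      motion_correlation - agreement between phone motion and selfie parallax
      device_trust       - device-intelligence reputation signal
    """
    weights = {"face": 0.4, "pad": 0.3, "motion": 0.2, "device": 0.1}
    return (weights["face"] * face_match
            + weights["pad"] * pad_score
            + weights["motion"] * motion_correlation
            + weights["device"] * device_trust)

def decide(score: float, threshold: float = 0.8) -> str:
    """Approve only when the fused confidence clears the threshold."""
    return "approve" if score >= threshold else "step-up review"

# A genuine session with strong, mutually consistent signals passes:
genuine = fuse_risk_signals(0.95, 0.90, 0.85, 0.90)

# A convincing deepfake may score high on face match alone, but weak
# liveness and motion correlation drag the fused score down:
spoofed = fuse_risk_signals(0.95, 0.50, 0.10, 0.50)
```

The design point matches Wells’ argument: an attacker who spoofs the selfie still has to produce a consistent, correlated motion signal at the same time, which is much harder than forging any one signal in isolation.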
These correlations are one of the subjects covered by Jumio’s patent portfolio, Wells notes.
Handling the complexity of architecting a solution from the various layers available to fit the needs of a particular business in a particular jurisdiction sometimes takes Jumio’s help, Wells says. Many businesses know what they need to do to maintain security and compliance, but cannot use a “one-size-fits-all” implementation across international borders.
While 60 percent of consumers want governments to regulate AI to address concerns about deepfakes, the degree of trust in their ability to do so varies widely between countries. Almost 7 in 10 Singaporeans believe their government can effectively regulate AI, compared to just 44 percent in Mexico, 31 percent in the U.S. and only 26 percent in the UK.
Wells sees potential differences in the ability of regulators, but warns of the importance of being realistic about what they can do to react to cutting-edge technology. He shares a brief anecdote about working for Netscape during the browser wars of the 1990s. Regulators were slow to grasp the business advantage of bundling a web browser with an operating system as a free bonus. In this case, businesses and individuals cannot wait for regulators to recognize and address the threat of tools that make deception easier.