Kantara Initiative launches group devoted to deepfake injection attack threats
“It’s probably not as bad as this makes it seem,” says Andrew Hughes, VP of global standards for FaceTec and a member of the Kantara Initiative, at the beginning of his talk on how generative AI and deepfakes are challenging ID proofing and verification systems. But the arguments that follow say otherwise.
Hughes’ talk at EIC 2024 looks at how the Kantara Initiative is working to address the risk and answer critical questions facing traditional ID proofing and verification systems. “In a universe where everything is digital,” Hughes asks, “how are we going to onboard humans and not deepfake bots? How will we prevent impersonation and account takeover? If you roll out digital everything – wallets, credentials, services – without sufficient care and attention, all you’re going to do is increase fraud.”
Highlighting Biometric Update’s coverage of the UK welfare fraud case and the infamous $25 million Hong Kong deepfake meeting as evidence of the looming threat, Hughes introduces the Kantara Initiative’s Deepfake/AI Threats and ID Proofing and Verification discussion group, which aims to establish a body of work on deepfakes and AI ahead of the formation of a formal working group. Group members are researching how ID proofing systems work, what vulnerabilities they have and where they are heading in the future.
Hughes’s preferred definition of deepfake is “any believable media generated by a deep neural network.” He notes how generative AI has driven interest in the biometric identity verification and ID proofing market, by adding automation and scale to the mix. But he also says standard methods of identity verification are prone to fail as new techniques emerge.
“Injection attacks are where all ID verification providers are focused,” Hughes says. Virtual cameras add another layer of complexity to fraud prevention, and they are on the rise.
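To illustrate why injection attacks are so hard to catch downstream, consider a minimal sketch (not from Hughes’ talk): once frames reach the verification pipeline, a virtual camera replaying a pre-rendered deepfake is indistinguishable from a live webcam, because the matcher only ever sees pixel data. The file name and the placeholder matcher below are hypothetical.

```python
# Minimal illustrative sketch of an injection attack surface (assumes OpenCV is installed).
import cv2

def capture_frames(source, max_frames=30):
    """Grab frames from any cv2-compatible source: a physical webcam index,
    or a file/stream that a virtual camera driver could just as easily expose."""
    cap = cv2.VideoCapture(source)
    frames = []
    while cap.isOpened() and len(frames) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()
    return frames

def verify_selfie(frames):
    """Stand-in for a face matcher: it only sees pixel arrays, so it cannot
    tell whether they came from a live sensor or an injected stream."""
    return len(frames) > 0  # placeholder decision logic

genuine = capture_frames(0)                # physical webcam
injected = capture_frames("deepfake.mp4")  # virtual camera / injected media (hypothetical file)
print(verify_selfie(genuine), verify_selfie(injected))  # same code path either way
```

The point of the sketch is that defenses have to establish the provenance of the capture itself, not just analyze the frames it produces.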
Hughes says biometric linking, liveness detection, new modalities and data acquisition techniques, and other systemic technology improvements offer partial solutions. But he also points a finger at international standards, which he believes move far too slowly to properly address rapidly evolving threats. “While we’re addressing deepfake threats and identity verification, why don’t we fix the standards?” he asks.
This is one of many tasks Kantara’s new group will take on as it proposes a restructuring of international ID verification standards such as ISO/IEC 24760 and ISO/IEC 29003 and develops its own certification program.