Sensity alleges biometric onboarding providers downplaying deepfake threat
Deepfake videos have received breathless attention in the popular press for their potential as fraud instruments, from spreading misinformation to carrying out attacks against online access control systems and committing theft.
A new report from biometric KYC vendor Sensity suggests that deepfakes are a much stronger threat to customer onboarding systems based on selfie biometrics than is appreciated by those providing the technology.
The company finds significant vulnerabilities to deepfake spoofing among biometric KYC providers that together make up almost a quarter of global market share. The firm also finds that there is insufficient technical literature on the vulnerability of liveness detection products to deepfakes, and that the models used in academic testing are insufficiently rigorous.
For the report ‘Deepfakes vs Biometric KYC Verification,’ Sensity carried out deepfake spoofing attacks on “ten of the most widely adopted biometric verification vendors for KYC,” which remain unnamed, and found that “the vast majority were severely vulnerable to deepfake attacks.”
Sensity developed what it calls the industry-first Deepfake Offensive Toolkit (DOT), which it uses to improve its own services. Turned on competitors’ products, the spoofing system fooled all five active liveness tests, all five ID verification tests, four out of five passive liveness tests, and all four full KYC systems evaluated.
The finding is notable in light of research from BioID, which has found that digital artefacts revealing the manipulation of video can be detected with AI algorithms, suggesting a dedicated algorithm could be layered into a presentation attack detection (PAD) system for protection from deepfakes. BioID’s Ann-Kathrin Freiberg did, however, warn about application-level attacks such as those using virtual cameras.
The report from Sensity refers to injecting the deepfake video into the biometric system, rather than presenting it to a camera.
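To illustrate the kind of layering described above, the following is a minimal, hypothetical sketch in Python of how a frame-level artefact check could sit alongside existing PAD logic in a selfie-video verification flow. The function names, threshold and scoring heuristic are assumptions for illustration only; they are not BioID’s, Sensity’s or any vendor’s algorithm, and a real deployment would use a model trained to recognise deepfake artefacts.

```python
# Illustrative sketch only: a frame-level artefact check layered alongside PAD logic.
# The names, threshold and scoring heuristic are hypothetical stand-ins.
import cv2
import numpy as np

ARTEFACT_THRESHOLD = 0.5  # hypothetical decision threshold


def artefact_score(gray: np.ndarray) -> float:
    """Toy stand-in for a trained artefact detector.

    Uses high-frequency residual energy as a crude proxy; returns a value in
    [0, 1), where higher is treated as more suspicious.
    """
    residual = gray - cv2.GaussianBlur(gray, (5, 5), 0)
    return float(np.tanh(residual.std() / 32.0))


def deepfake_check(video_path: str, sample_every: int = 10) -> bool:
    """Sample frames from the captured selfie video; True means no artefacts were flagged."""
    cap = cv2.VideoCapture(video_path)
    scores, idx = [], 0
    while True:
        ok, frame_bgr = cap.read()
        if not ok:
            break
        if idx % sample_every == 0:
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
            scores.append(artefact_score(gray))
        idx += 1
    cap.release()
    # Aggregate per-frame scores with a median so single noisy frames do not dominate.
    return bool(scores) and float(np.median(scores)) < ARTEFACT_THRESHOLD
```

A content-level check like this analyses whatever pixels it receives, so it can in principle be applied to injected video as well, but it cannot by itself establish that the stream originated from a genuine camera rather than a virtual one.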
Unite.AI suggested in a recent article that biometrics can detect deepfakes even more easily than artefact detection systems can, and numerous liveness and PAD vendors claim the ability to detect deepfakes.
“We have promptly disclosed the vulnerability to all the interested KYC vendors,” explains Sensity CEO and Chief Scientist Giorgio Patrini in an email to Biometric Update. “The disclosure happened several months prior to the release of our report. To our surprise, our results were downplayed or sometimes entirely dismissed by these companies. The common argument was not that the vulnerability does not exist in their product, but instead that their clients would not be interested in seeing it fixed. This is worrying since in the last year we have recorded particularly fast growth in the use of deepfakes and algorithmic avatars for automating spoofing to liveness in KYC, particularly with Sensity’s clients in Latin America.”
Sensity also refers to a 2021 fraud attack in China which appears to have used deepfakes to beat a PAD system.
The report’s conclusions leave open questions about how common the attack method is, and how easy it would be to scale the use of sophisticated real-time deepfakes with camera hijacking.
In a separate incident, deepfake technology was used to impersonate an American consumer, rather than a celebrity or political figure. In this case, West Virginia’s WSAZ reports, the deepfake was used to pitch the individual’s contacts on a cryptocurrency investment, not to attack biometric systems.
Frank Hersey contributed to this report.
Article Topics
BioID | biometric liveness detection | biometrics | deepfakes | face biometrics | fraud prevention | KYC | onboarding | presentation attack detection | selfie biometrics | Sensity | spoofing