Biometric video injection attacks getting easier; ID R&D helps devs mitigate

Using generative AI and open-source tools, hackers can now easily create deepfakes and voice clones that mimic another person's appearance and voice. The cost and expertise required to carry out such fraud have dropped significantly, according to an ID R&D report that offers developers guidance on mitigating video injection attacks.
While creating deepfake viral videos for social media is relatively straightforward from an attacker’s perspective, real-time video injection attacks demand more advanced technology and a sophisticated delivery mechanism.
In a recent incident, it was reported that the British engineering group Arup suffered a loss of approximately $25 million when scammers used AI-generated deepfakes to impersonate the group’s CFO during a video conference call.
These video injection attacks are becoming increasingly common, notably within KYC (know your customer) systems, where biometric data such as video frames of a person’s face are compared against an identity document.
During a recent Biometric Update webinar, ID R&D President Alexey Khitrov demonstrated how freely accessible software can make one person convincingly appear to be another. A poll of attendees revealed that most organizations have either already encountered injection attacks and deepfakes or expect to in the near future.
How do video injection attacks work?
In video injection attacks, hackers manipulate or fabricate a digital video stream and insert it into a communication channel to deceive biometric verification systems or human operators. These attacks involve digital manipulation techniques such as 3D rendering, face morphing, face swaps, and deepfakes.
Video injection vectors are often used to trick remote facial recognition systems in onboarding and KYC scenarios, such as when individuals open bank accounts from smartphones, laptops, or PCs.
These attacks can be carried out through various means, including exploiting vulnerabilities in hardware, software, network protocols, and client-server interactions, as well as manipulating virtual environments and external devices.
Some commonly used methods for video manipulation include virtual camera software (like ManyCam), hardware video sticks, JavaScript injection, smartphone emulators, and intercepting network traffic, ID R&D says in its guidance paper. There are more advanced techniques such as hardware injection, which require a higher level of expertise to implement.
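One simple defensive check against the virtual-camera vector is to inspect the names of the capture devices a client exposes. The sketch below is illustrative only, with a hard-coded device list and an assumed blocklist; in practice the names would come from a platform camera API (DirectShow, AVFoundation, V4L2) and name matching alone is easy to evade.

```python
# Illustrative sketch: flag capture devices whose names match known
# virtual-camera software (one of the injection vectors mentioned above).
# The device list here is hard-coded for the example; in a real client
# it would be enumerated from the operating system's camera API.

KNOWN_VIRTUAL_CAMERAS = {"manycam", "obs virtual camera", "snap camera", "xsplit"}

def flag_suspect_devices(device_names):
    """Return device names that look like virtual cameras."""
    return [name for name in device_names
            if any(v in name.lower() for v in KNOWN_VIRTUAL_CAMERAS)]

devices = ["Integrated Webcam", "ManyCam Virtual Webcam", "OBS Virtual Camera"]
print(flag_suspect_devices(devices))
# → ['ManyCam Virtual Webcam', 'OBS Virtual Camera']
```

Because a motivated attacker can rename a virtual device, checks like this are only one weak signal to combine with the stronger defenses discussed below.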
Organizations are also advised to comply with relevant certifications and standards from the ISO/IEC 27000 family when developing KYC systems. Although these standards do not specifically address video injection attacks, they contribute to the overall robustness of the infrastructure.
How do we prevent video injection attacks?
In its report “A Developers’ Guide Against Video Injection Attacks,” ID R&D indicates that while many KYC systems can identify standard presentation attacks, they have specific vulnerabilities that are not covered by current standards.
Common countermeasures include encrypting the video feed and transmitting it over a secure channel to prevent manipulation, and continuous authentication, such as repeated biometric checks, to confirm that the feed remains genuine throughout a session.
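The integrity side of secure transmission can be sketched with per-frame message authentication. This is a toy example, not ID R&D's implementation: each frame is tagged with an HMAC bound to its sequence number, so a frame that is replaced or replayed in transit fails verification. Key distribution and transport encryption (e.g. TLS) are out of scope here.

```python
import hmac, hashlib, os

# Sketch of frame-level integrity protection: the capture side tags each
# frame with an HMAC; the verifier recomputes the tag to detect frames
# that were replaced or injected in transit.

KEY = os.urandom(32)  # shared secret, hard-coded for this toy example

def tag_frame(frame_bytes, seq):
    # Bind the sequence number to the frame so replayed frames fail too.
    msg = seq.to_bytes(8, "big") + frame_bytes
    return hmac.new(KEY, msg, hashlib.sha256).digest()

def verify_frame(frame_bytes, seq, tag):
    expected = tag_frame(frame_bytes, seq)
    return hmac.compare_digest(expected, tag)

frame = b"\x00" * 1024                 # stand-in for encoded frame data
tag = tag_frame(frame, seq=1)
print(verify_frame(frame, 1, tag))            # → True (genuine frame)
print(verify_frame(b"\xff" * 1024, 1, tag))   # → False (injected frame)
```

The constant-time comparison (`hmac.compare_digest`) matters here: a naive `==` check can leak timing information about the expected tag.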
Many remote onboarding and KYC products are integrating AI-based anomaly detection and active liveness detection, in which the software analyzes a user's real-time movements to verify authenticity.
Additional strategies include digital watermarking to trace the original source of the video and multi-factor authentication to provide an extra layer of security.
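A minimal watermarking sketch, assuming a simple least-significant-bit scheme (illustrative only, not a production technique): a short source identifier is embedded in pixel LSBs so a verifier can later check where a frame originated. Real deployments use robust watermarks that survive compression and re-encoding, which LSB embedding does not.

```python
# Embed and recover a short source identifier in pixel LSBs.

def embed(pixels, payload: bytes):
    """Write payload bits, LSB-first per byte, into pixel LSBs."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit
    return out

def extract(pixels, n_bytes):
    """Read n_bytes back out of the pixel LSBs."""
    data = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        data.append(byte)
    return bytes(data)

frame = [128] * 64                  # stand-in for one row of 8-bit pixels
marked = embed(frame, b"cam01")
print(extract(marked, 5))  # → b'cam01'
```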
The European Association for Biometrics (EAB) will release the TS 18099 standard in October of this year. The standard addresses the injection of biometric data between the data capture and signal processing components of a biometric system used for remote identity proofing.
Article Topics
biometrics | deepfake detection | deepfakes | face swap | ID R&D | injection attacks