Generative AI injection attacks best tackled at source, says new Yoti white paper
Yoti has released a new white paper on the threat of generative AI and the best ways organizations can defend themselves against deepfakes. Delving into the evolving threat landscape and the differences between presentation attacks and injection attacks, the biometrics and digital ID firm lays out how updates to its MyFace Secure Image Capture (SICAP) software can help address the problem.
Generative AI deepfakes can be used to spread misinformation and disinformation, launch phishing attacks, commit identity theft and financial fraud, or fool authentication and verification systems into granting access to sensitive material. Privacy and consent are major issues, especially for public figures and celebrities whose likenesses have been exploited, and for age-restricted dating and matchmaking apps. Detection systems are racing to keep up.
“The rate of development of generative AI presents a problem to not just ensuring a person is who they say they are, but also to content platforms who need to be sure that the content added by a user is genuine,” says the paper. “This could be video sites, live streaming, dating profiles or social media platforms.”
Strategically, Yoti focuses on early detection and preventing spoofing and injection attacks at their source. Its software responds at the moment a bad actor attempts to enter a generative AI image or video into the verification process. This involves identifying which type of attack is at play in the image input – either a direct (presentation) attack, or an indirect (injection) attack.
Presentation attacks occur when a fake biometric is presented to a device’s camera to deceive verification software. They can be attempted with physical masks, images displayed on a screen, or bots programmed to behave like humans. Because these attacks are relatively well understood, the best defense is liveness detection. Yoti’s proprietary liveness product, MyFace Live, offers passive liveness detection through selfie biometrics and has passed iBeta’s ISO/IEC 30107-3 PAD Level 2 testing. MyFace Live achieved a 100 percent attack detection rate during testing by NIST.
Injection attacks are newer and more complicated. Rather than presenting a fake to the camera, attackers hijack the feed itself: by connecting directly to the device to inject fake video, by installing virtual camera software that simulates a real webcam, or by exploiting vulnerabilities in the capture code. Against these attacks, Yoti’s MyFace SICAP obfuscates the code at the point an image is taken and attaches a cryptographic signature key, both of which are changed frequently for extra security. MyFace also blocks virtual camera software such as ManyCam, VCam and FakeWebCam.
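The white paper does not disclose how SICAP’s signing actually works, but the general idea of binding a frequently rotated key to the moment of capture can be sketched in a few lines. The key lifetime, function names, and HMAC-SHA256 choice below are all illustrative assumptions, not Yoti’s scheme:

```python
import hashlib
import hmac
import os
import time

# Hypothetical rotation interval; SICAP's real parameters are not public.
KEY_LIFETIME_S = 300

def issue_key() -> dict:
    """Server side: mint a fresh signing key with an expiry timestamp."""
    return {"key": os.urandom(32), "expires": time.time() + KEY_LIFETIME_S}

def sign_capture(image_bytes: bytes, key_record: dict) -> str:
    """Capture side: sign the raw image bytes at the moment of capture."""
    return hmac.new(key_record["key"], image_bytes, hashlib.sha256).hexdigest()

def verify_capture(image_bytes: bytes, signature: str, key_record: dict) -> bool:
    """Verifier side: accept only images signed with a current, unexpired key."""
    if time.time() > key_record["expires"]:
        return False  # stale key: treat the capture as untrusted
    expected = hmac.new(key_record["key"], image_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

key = issue_key()
frame = b"raw capture bytes"
sig = sign_capture(frame, key)
print(verify_capture(frame, sig, key))              # genuine capture: True
print(verify_capture(b"injected frame", sig, key))  # tampered feed: False
```

The point of rotating the key (and, in SICAP’s case, the obfuscated capture code) is that a signature harvested by an attacker is only useful for a short window, so a pre-generated deepfake injected into the pipeline cannot carry a valid, current signature.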
Updates to MyFace in version 2 allow for more customization in data sharing on a per-case basis, improve analytics and support for integrators, and bring the total number of languages for which MyFace SICAP can be localized to 40.
Biometric verification a key weapon in arms race with generative AI
Yoti’s paper is both alarming – “combating generative AI has now become an arms race,” it says – and refreshingly blunt in acknowledging that all we can do is prepare as well as possible and bring the best tools to the job.
“No-one really understands how generative AI will develop,” it says. “Once evading detection is added to a generative AI model as an objective during training, the situation will very quickly get a lot more difficult, particularly with images.” There are still hiccups in generative AI video that give it away, specifically in temporal inconsistency. But the thing about generative AI is that it will improve – both because it is still in its early stages, and because improving itself is one of the things it is designed to do.
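The temporal-inconsistency tell can be made concrete with a toy check. The sketch below is purely illustrative and not any product’s detector: it assumes a face tracker supplies one measurement per frame (here, a hypothetical inter-pupil distance in pixels) and flags clips where that measurement jumps more than a made-up threshold between consecutive frames, since a real face filmed by a real camera changes smoothly while generated video can jitter:

```python
def max_frame_jump(measurements: list) -> float:
    """Largest change in the measurement between consecutive frames."""
    return max(abs(b - a) for a, b in zip(measurements, measurements[1:]))

def looks_generated(measurements: list, threshold: float = 5.0) -> bool:
    """Flag a clip whose per-frame jitter exceeds a (hypothetical) threshold."""
    return max_frame_jump(measurements) > threshold

real_clip = [62.0, 62.3, 62.1, 62.4, 62.2]  # smooth, camera-like motion
fake_clip = [62.0, 62.3, 71.8, 61.9, 62.4]  # abrupt mid-clip jump
print(looks_generated(real_clip))  # False
print(looks_generated(fake_clip))  # True
```

As the paper warns, this kind of tell is fragile: once evading detection becomes a training objective for generative models, such simple statistical signals will be among the first to disappear.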
As it improves, the points at which it presents a risk will increase in number and frequency. From dating sites to online gaming to social media and financial services, the opportunities for fraudulent behavior are myriad. Yoti’s paper points to issues of consent around so-called intimate images, and the exploitation of celebrity likenesses, as examples of growing problems that could be addressed with tools that provide accurate and secure age verification, ID authentication and liveness, watch lists for fake content of public figures, and further capabilities rooted in biometrics and digital identity.
“As the threat of generative AI in identity verification accelerates, we have developed a comprehensive strategy focused on early detection,” says Yoti CTO Paco Garcia. “We are committed to developing leading technology which is at the forefront of combating evolving threats in the generative AI landscape. The combination of our liveness technology and SICAP solution gives businesses enhanced security and defense against fraudulent attempts and deepfakes.”