Here’s the newest camera hack fraudsters are using to beat facial recognition
By Stuart Wells, Chief Technology Officer at Jumio
As organizations have tried to stay ahead of cybercriminals, faces and other biometrics have become the passwords for so many everyday functions. From unlocking a phone, to accessing a bank account, to setting up a doctor’s appointment, important tasks are completed by verifying that the correct face is the one behind (or in front of) the camera.
However, as with traditional passwords, fraudsters have gotten creative about circumventing facial recognition security. By now, almost everyone understands deepfakes and the threats associated with them. But a deepfake is, by definition, just an altered or fabricated video sequence; to present one to a facial recognition system as a live camera feed, cybercriminals depend on a technique known as "camera injection."
Some experts have predicted that as much as 90 percent of content on the web could be synthetically generated by 2026, making it increasingly difficult for organizations to discern whether users are who they claim to be. Here is a rundown of how fraudsters pull off camera injection attacks, what makes them so dangerous and how organizations can protect themselves.
How do camera injection attacks work?
As passwords have evolved beyond numerical and alphabetical characters, hackers have been pushed to adapt beyond the likes of brute force attacks and credential stuffing. Beating facial recognition software requires a far greater level of sophistication, which has led to the invention of tactics designed to trick biometric and liveness detection tools.
Fraudsters introduce deepfake videos into the system using a camera injection attack. Camera injection occurs when a fraudster bypasses the charge-coupled device (CCD) of a camera to inject pre-recorded content, a real-time face-swap video stream or completely fabricated (deepfake) content. The pre-recorded content could be an unaltered video of a real person that a bad actor is attempting to defraud. The pre-recorded or real-time generated video could instead be a clip in which the face has been altered in some way, or one of a completely synthetic face that does not exist.
This bypass of the live feed that a real camera's CCD would normally capture can be accomplished in a couple of ways. One is to hack the device driver of a real camera and inject the video stream at a lower level of the driver stack. The more common method of camera injection is to install a virtual camera device driver that simply feeds a pre-recorded or real-time generated video stream to the system, presenting it as a real camera feed.
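One basic countermeasure against the virtual-camera variant is to check the name the capture device reports against known virtual-camera drivers. The sketch below is a minimal illustration, assuming a hypothetical name list and that the device name has already been enumerated from the OS; reported names can themselves be spoofed, so this is only a first-line signal, not a complete defense.

```python
# Hypothetical sketch: flag capture devices whose reported name matches a
# known virtual-camera driver. The name list below is illustrative, not
# exhaustive, and device names can be spoofed by a determined attacker.
KNOWN_VIRTUAL_CAMERAS = {
    "obs virtual camera",
    "manycam virtual webcam",
    "snap camera",
    "xsplit vcam",
}

def looks_like_virtual_camera(device_name: str) -> bool:
    """Return True if the reported device name matches a known virtual camera."""
    name = device_name.strip().lower()
    return any(known in name for known in KNOWN_VIRTUAL_CAMERAS)

# A feed reporting itself as "OBS Virtual Camera" is suspect; a typical
# built-in webcam name is not.
print(looks_like_virtual_camera("OBS Virtual Camera"))  # True
print(looks_like_virtual_camera("Integrated Webcam"))   # False
```

In practice this check would be combined with the driver-integrity and video-forensics controls described below, since a compromised real-camera driver reports a legitimate name.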
Since a video is a series of still images, a fraudster will sometimes feed the same image into every frame of a video stream, producing a stream in which there is no motion at all. A more sophisticated technique, but also a more time-intensive one for the fraudster, is altering or fabricating a video sequence in which motion is present. The most sophisticated approach is a deepfake that can be manipulated in real time to perform actions requested by the identity verification system.
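The simplest variant, a single image replayed as every frame, can be caught by measuring how much consecutive frames differ. The sketch below, assuming frames arrive as numpy arrays and using an illustrative difference threshold, flags a stream whose frames are nearly identical.

```python
import numpy as np

def is_static_replay(frames, threshold=1.0):
    """Flag a stream whose consecutive frames are (nearly) identical --
    a telltale sign of a single still image injected into every frame.
    The threshold is an illustrative assumption, tuned per deployment."""
    diffs = [
        np.mean(np.abs(a.astype(float) - b.astype(float)))
        for a, b in zip(frames, frames[1:])
    ]
    return bool(max(diffs) < threshold)

# A "stream" of identical frames is flagged; frames with real pixel
# variation between them are not.
rng = np.random.default_rng(0)
still = rng.integers(0, 255, (48, 64), dtype=np.uint8)
static_stream = [still] * 10
live_stream = [rng.integers(0, 255, (48, 64), dtype=np.uint8) for _ in range(10)]
print(is_static_replay(static_stream))  # True
print(is_static_replay(live_stream))    # False
```

A real implementation would also tolerate sensor noise and compression jitter, which is why the comparison uses a small threshold rather than exact equality.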
What is the threat?
Once an attacker has successfully passed through this stage of verification, they will have access to an account that is not theirs, leaving them free to wreak havoc under a synthetic or stolen identity. From there, malicious actors can register for phony accounts, complete fraudulent transactions and more.
The primary concern with the camera injection technique is that, if done successfully, organizations will not realize they have been beaten. If the facial recognition technology in place believes it has properly verified a user’s identity when it has actually been fooled by camera injection, fraudsters can essentially sneak in undetected.
Only when an account exhibits some kind of suspicious behavior, like an unusual bank transaction, would an organization discover that it may have fallen victim to this kind of attack. In many cases, by the time an organization detects the threat, the damage to the user's account has already been done.
Can camera injection attacks be prevented?
While fraudsters’ tactics continue to evolve, so do the mechanisms designed to keep them out. Robust identity verification with sophisticated liveness detection tools can protect organizations from fraudsters employing the camera injection technique.
To defend against this type of tactic, organizations can establish controls to detect when a camera device driver has been compromised, when a virtual camera is being used, or when forensic evaluation of a video stream reveals manipulation or fabrication.
Comparing natural motion to the motions in the captured video can help reveal manipulation. Elements like eye motion, expression changes or regular blinking patterns occur naturally. If no such motion is detected, there is a high chance that a single image is being replayed to create a video sequence.
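One way to operationalize the blinking cue is to check whether the number of blinks detected over a capture falls in a loosely human range. The sketch below assumes blink events have already been detected upstream (e.g., via facial landmarks, which are not shown here); the per-minute bounds are rough illustrative assumptions, not clinical values.

```python
def blink_rate_plausible(blink_times_s, duration_s, lo=4.0, hi=40.0):
    """Check whether blinks detected during a capture fall within a loosely
    human per-minute range. Zero blinks over a long capture suggests a
    replayed still image; an implausibly high rate suggests fabrication.
    The lo/hi bounds are illustrative assumptions."""
    if duration_s <= 0:
        return False
    per_minute = len(blink_times_s) * 60.0 / duration_s
    return lo <= per_minute <= hi

# Three blinks in a 12-second capture (~15/min) is plausible; none is not.
print(blink_rate_plausible([2.1, 5.8, 9.4], duration_s=12))  # True
print(blink_rate_plausible([], duration_s=12))               # False
```

Eye motion and expression changes can be scored the same way: measure the signal, then ask whether it falls in a range a live face would produce.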
The capture process can also inject artifacts that should alter the captured images in ways that are detectable. Some of these could be changing the camera parameters (like ISO, aperture, frame rate, resolution, etc.) and observing whether the expected changes occur in the capture. Another could be changing the color or illumination intensity of the device’s screen and looking for a corresponding reflection from the face being captured.
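The screen-illumination challenge can be sketched as a simple before/after comparison: flash a color on the device's screen and check that the corresponding channel brightens in the face region. This is a minimal illustration assuming face crops arrive as numpy color arrays; the channel index and gain threshold are illustrative assumptions, and a real system would account for ambient lighting and camera auto-exposure.

```python
import numpy as np

def reflection_response(face_before, face_after, channel=2, min_gain=2.0):
    """After flashing a colored screen (e.g., blue), a live face should
    reflect it: the flashed channel's mean should rise in the face region.
    A stream injected behind the camera driver cannot react to the
    challenge. Channel index and min_gain are illustrative assumptions."""
    before = face_before[..., channel].mean()
    after = face_after[..., channel].mean()
    return bool((after - before) >= min_gain)

# Synthetic check: brighten one channel to simulate a real reflection.
face = np.full((32, 32, 3), 100.0)
lit = face.copy()
lit[..., 2] += 10.0  # the flashed channel rises on a live face
print(reflection_response(face, lit))   # True
print(reflection_response(face, face))  # False
```

Changing camera parameters like ISO or frame rate works on the same principle: issue a challenge the real capture pipeline must respond to, then verify the expected change actually appears in the frames.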
By relying on the accelerometer within the device used to take a verification selfie, and comparing it to the changes in the objects (e.g., face) captured during the video, organizations can determine whether a camera has been compromised by a potential hacker. The individual frames of the video can be forensically analyzed for signs of tampering, such as double compressed parts of the image, or for artifacts indicating a computer-generated (deepfake) image.
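The accelerometer check can be framed as a correlation test: per-frame device motion should track per-frame image motion if the feed is genuine. The sketch below is an illustration assuming both signals have already been reduced to per-frame magnitudes; the correlation threshold is an assumption for demonstration.

```python
import numpy as np

def motion_correlates(accel_magnitude, frame_motion, min_corr=0.5):
    """Correlate per-frame device motion (from the accelerometer) with
    per-frame image motion. An injected stream will not track the physical
    movement of the handset. min_corr is an illustrative threshold."""
    r = np.corrcoef(accel_magnitude, frame_motion)[0, 1]
    return bool(r >= min_corr)

# A genuine feed's image motion follows the device shake; an injected
# feed's motion is unrelated to it (here, perfectly anti-correlated for
# a deterministic demonstration).
rng = np.random.default_rng(1)
shake = rng.random(50)
real_feed = shake + 0.05 * rng.random(50)  # image motion tracks the shake
injected = 1.0 - shake                     # motion unrelated to the shake
print(motion_correlates(shake, real_feed))  # True
print(motion_correlates(shake, injected))   # False
```

The same per-frame analysis pipeline can carry the forensic checks the paragraph mentions, such as flagging double-compressed regions or generation artifacts in individual frames.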
Living without fear of fraudsters
Facial recognition tools are meant to supply an added level of security for organizations, and the emergence of the camera injection technique poses a legitimate threat to that extra layer of protection.
A recent Jumio survey revealed that 52 percent of respondents believe they can accurately detect a deepfake video, but the reality is that synthetic content is growing more sophisticated and harder to decipher. In a recent incident in China, an AI-powered video impersonator assumed the identity of the victim’s friend and scammed them out of more than $600,000.
As prevalent as the threat of synthetic content may be, sophisticated liveness detection during the identity verification process enables businesses to stay ahead of hackers attempting to use techniques like camera injection. With these resources at their disposal, organizations can feel confident that malicious actors are being kept at bay while ensuring legitimate business users can still gain entry to their accounts.
About the author
As Chief Technology Officer at Jumio, Stuart is responsible for all aspects of Jumio’s innovation, machine learning and engineering. An industry veteran with more than 30 years of tech experience, Stuart previously was the chief product and technology officer at FICO and held executive positions at Avaya and Sun Microsystems.
DISCLAIMER: Biometric Update’s Industry Insights are submitted content. The views expressed in this post are that of the author, and don’t necessarily reflect the views of Biometric Update.