Scale of deepfake hiring fraud necessitates real-time detection: Reality Defender

When fighting digital ghosts in 2025, the question is not who you gonna call, but who you gonna hire. Deepfake candidates and synthetic identities have flooded the online hiring pipeline, posing new threats to recruiters, HR teams and enterprises. Once a fake employee is inside the system, they have access to sensitive information and the security permissions needed to steal customer data, plant malware or redirect salaries and funds overseas. In the last year, CrowdStrike revealed more than 320 incidents of remote job fraud by North Korean actors. Gartner predicts that by 2028, one in four job candidates globally could be fake.
So how do you make sure that your new star software engineer isn’t actually a front for North Korean espionage? According to a new report from Reality Defender focused on deepfake hiring fraud, it starts with Zoom, the video conferencing software that caught on during pandemic lockdowns and has since become a common tool for hosting remote job interviews.
To prove its point, the deepfake detection firm staged a Zoom interview with a synthetic candidate. Using “widely available tools,” the Reality Defender team whipped up Gary, a candidate with extensive cybersecurity experience who had recently wrapped up a cloud infrastructure job at a large financial services firm. An affable, all-American-looking chap with a dimpled chin and a healthy spackling of stubble, Gary passed every test of the human eye and ear.
But “when scanned with Reality Defender inside Zoom, the platform flagged him as manipulated in seconds,” emphasizing the importance of real-time deepfake detection in live remote meetings.
“Humans have a very hard time distinguishing between what might have come from a generative model and what might not have,” says Reality Defender Senior Staff Scientist Jacob Seidman. The easy tells that once gave away AI avatars – extra fingers or limbs, say – have been more or less ironed out of many deep learning and generative AI models.
Reality Defender’s tools are designed to “look beyond what the eye or ear can perceive.” Working inside Zoom, models “analyze pixel-level traces in video and frequency patterns in audio to find signals invisible to humans.” The firm says training its algorithms on massive datasets of both authentic and generated media allows them to pick up the most subtle artifacts across multiple modalities.
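Reality Defender does not disclose its features, but the flavor of a frequency-domain check can be sketched in a few lines. The toy Python function below measures how much of an audio clip’s spectral energy sits above a cutoff frequency, a crude stand-in for the learned audio features the firm describes; the cutoff, the feature itself and the sample clip are illustrative assumptions, not Reality Defender’s method.

```python
import numpy as np

def high_band_energy_ratio(samples: np.ndarray, sample_rate: int,
                           cutoff_hz: float = 7000.0) -> float:
    """Toy feature: fraction of spectral energy above a cutoff frequency.
    Some speech synthesizers leave unusual energy patterns in high bands;
    production detectors learn far richer features than this."""
    spectrum = np.abs(np.fft.rfft(samples))           # magnitude spectrum
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    total = spectrum.sum() + 1e-12                    # guard against silence
    return float(spectrum[freqs >= cutoff_hz].sum() / total)

# One second of stand-in audio at 16 kHz (a pure tone, not real speech)
rate = 16000
t = np.linspace(0, 1, rate, endpoint=False)
clip = np.sin(2 * np.pi * 440 * t)
print(f"high-band energy ratio: {high_band_energy_ratio(clip, rate):.4f}")
```

A production detector would feed many learned features, across video frames as well as audio, into trained models rather than rely on any single hand-picked ratio.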
“A secure Zoom bot streams video and audio to Reality Defender’s detection pipeline. Multiple models analyze each feed in parallel, trained to identify unique signals left by generative AI. The system ensembles these results into a confidence score, delivered instantly to the host dashboard.”
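In miniature, that fan-out-and-ensemble step might look like the sketch below, in which several hypothetical per-modality detectors score the same media chunk in parallel and their outputs are averaged into a single confidence score. The detector names, fixed scores and simple mean are placeholders; Reality Defender’s actual models and weighting are not public.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-modality detectors; each returns a probability that
# the chunk is synthetic. Names and fixed scores are placeholders.
def video_pixel_model(chunk: bytes) -> float:
    return 0.91

def audio_spectral_model(chunk: bytes) -> float:
    return 0.84

def face_consistency_model(chunk: bytes) -> float:
    return 0.88

DETECTORS = [video_pixel_model, audio_spectral_model, face_consistency_model]

def ensemble_confidence(chunk: bytes) -> float:
    """Score one media chunk with every detector in parallel, then
    combine; a plain mean stands in for whatever learned weighting a
    production system would use."""
    with ThreadPoolExecutor(max_workers=len(DETECTORS)) as pool:
        scores = list(pool.map(lambda detect: detect(chunk), DETECTORS))
    return sum(scores) / len(scores)

print(f"manipulation confidence: {ensemble_confidence(b'fake-chunk'):.2f}")
```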
Reality Defender says the platform is built on a “microservices architecture” that allows it to process millions of chunks of audio and video data simultaneously. “That design means we can scale to thousands of calls without lag, and deploy new detection models quickly as generative AI evolves.”
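The scaling claim follows from chunk independence: if every slice of audio or video can be scored on its own, work can spread across as many service instances as traffic demands. The asyncio sketch below imitates that pattern for a few simulated calls; the chunk sizes, latencies and scores are all made up for illustration.

```python
import asyncio

async def analyze_chunk(chunk: bytes) -> float:
    """Stand-in for a request to one detection microservice instance."""
    await asyncio.sleep(0.01)   # simulated network + model latency
    return 0.9                  # dummy manipulation score

async def process_call(call_id: str, chunks: list[bytes]) -> list[float]:
    # Chunks are independent, so they can be scored concurrently and
    # spread across as many service instances as are available.
    return await asyncio.gather(*(analyze_chunk(c) for c in chunks))

async def main() -> None:
    # Three simulated calls, four chunks of raw audio bytes each
    calls = {f"call-{i}": [b"\x00" * 8000 for _ in range(4)] for i in range(3)}
    all_scores = await asyncio.gather(
        *(process_call(cid, chunks) for cid, chunks in calls.items()))
    for cid, scores in zip(calls, all_scores):
        print(cid, scores)

asyncio.run(main())
```

Because each coroutine could just as easily be an HTTP call to a separate service, the same shape scales from one process to a fleet.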
For Reality Defender, the key takeaway is that detection relies on forensic signals rather than spotting “glitches.” Keeping pace with generative models and scaling to demand also matter, as does working across media, so that voice-only interviews face the same rigor as video meetings and voice clones are identified and weeded out.
“Solely focused on deepfake detection,” says Reality Defender, “our platform works across video, audio and image, delivering results in seconds and integrating directly into tools like Zoom and Teams.”