Scale of deepfake hiring fraud necessitates real-time detection: Reality Defender

Gary might look good on paper and on screen, but he’s a spy for Pyongyang

When fighting digital ghosts in 2025, the question is not who you gonna call, but who you gonna hire. Deepfake candidates and synthetic identities have flooded the online hiring pipeline, posing new threats to recruiters, HR teams and enterprises. Once a fake employee is inside the system, they have access to sensitive information and the security permissions needed to steal customer data, plant malware, or redirect salaries and funds overseas. In the past year, CrowdStrike has revealed more than 320 incidents of remote job fraud by North Korean actors, and Gartner predicts that by 2028, one in four job candidates globally could be fake.

So how do you make sure that your new star software engineer isn’t actually a front for North Korean espionage? According to a new report from Reality Defender focused on deepfake hiring fraud, it starts with Zoom, the video conferencing software that caught on during pandemic lockdowns and has since become a standard tool for hosting remote job interviews.

To prove its point, the deepfake-focused firm staged a fake Zoom interview with a fake candidate. Using “widely available tools,” the Reality Defender team whipped up Gary, who came prepared with extensive experience in cybersecurity and had recently wrapped up a cloud infrastructure job at a large financial services firm. An affable, all-American-looking chap with a dimpled chin and a healthy spackling of stubble, Gary passed every test of the human eye and ear.

But, “when scanned with Reality Defender inside Zoom, the platform flagged him as manipulated in seconds” – emphasizing the importance of real-time deepfake detection in live remote meetings.

“Humans have a very hard time distinguishing between what might have come from a generative model and what might not have,” says Reality Defender Senior Staff Scientist Jacob Seidman. The easy tells that once gave away AI avatars – extra fingers or limbs, say – have been more or less ironed out of many deep learning and generative AI models.

Reality Defender’s tools are designed to “look beyond what the eye or ear can perceive.” Working inside Zoom, models “analyze pixel-level traces in video and frequency patterns in audio to find signals invisible to humans.” The firm says training its algorithms on massive datasets of both authentic and generated media allows them to pick up the most subtle artifacts across multiple modalities.
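To make “frequency patterns in audio” concrete, here is a toy sketch, not Reality Defender’s actual method: some voice-synthesis pipelines band-limit or resample audio, leaving an unusually sharp high-frequency roll-off that a spectrum can reveal. Real detectors use learned models rather than a hand-set cutoff; everything below is invented for illustration.

```python
import numpy as np

def high_frequency_energy_ratio(audio, sample_rate, cutoff_hz=8000):
    """Fraction of spectral energy above cutoff_hz (a toy forensic signal)."""
    spectrum = np.abs(np.fft.rfft(audio))
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sample_rate)
    return spectrum[freqs >= cutoff_hz].sum() / (spectrum.sum() + 1e-12)

# Synthetic example: a pure 440 Hz tone has essentially no energy above 8 kHz.
sr = 48000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
print(high_frequency_energy_ratio(tone, sr))  # prints a value near 0.0
```

An unusually empty high band in a “live” human voice would be one subtle artifact among many; production systems combine thousands of such learned cues rather than any single statistic.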

“A secure Zoom bot streams video and audio to Reality Defender’s detection pipeline. Multiple models analyze each feed in parallel, trained to identify unique signals left by generative AI. The system ensembles these results into a confidence score, delivered instantly to the host dashboard.”
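The ensembling step described above, several models scoring a feed in parallel and their outputs being fused into one confidence score, can be sketched as a simple weighted average. The model names, scores, and weights here are invented for illustration; Reality Defender does not publish its actual fusion logic.

```python
from dataclasses import dataclass

@dataclass
class ModelResult:
    name: str
    score: float   # 0.0 = looks authentic, 1.0 = looks generated
    weight: float  # trust placed in this model's judgment

def ensemble_confidence(results):
    """Fuse per-model scores into one confidence via a weighted average."""
    total_weight = sum(r.weight for r in results)
    if total_weight == 0:
        raise ValueError("no usable model results")
    return sum(r.score * r.weight for r in results) / total_weight

results = [
    ModelResult("video_artifact_model", 0.92, 0.5),
    ModelResult("audio_frequency_model", 0.88, 0.3),
    ModelResult("lipsync_consistency_model", 0.75, 0.2),
]
print(f"manipulation confidence: {ensemble_confidence(results):.2f}")  # 0.87
```

A weighted average is the simplest possible ensemble; real systems may learn the fusion itself, but the shape of the computation, many detectors in, one host-facing score out, is the same.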

Reality Defender says the platform is built on a “microservices architecture” that lets it process millions of chunks of audio and video data simultaneously. “That design means we can scale to thousands of calls without lag, and deploy new detection models quickly as generative AI evolves.”
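The fan-out pattern behind that scaling claim, splitting a media stream into chunks and scoring them concurrently, can be sketched in a few lines. The `analyze_chunk` function here is a placeholder stand-in for a real detection model, not anything Reality Defender ships.

```python
from concurrent.futures import ThreadPoolExecutor

def analyze_chunk(chunk: bytes) -> float:
    # Placeholder "model": production systems run neural detectors here.
    return (sum(chunk) % 100) / 100.0

def score_stream(stream: bytes, chunk_size: int = 4, workers: int = 8):
    """Split a stream into chunks and score them in parallel, in order."""
    chunks = [stream[i:i + chunk_size] for i in range(0, len(stream), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(analyze_chunk, chunks))

scores = score_stream(bytes(range(32)))
print(len(scores))  # 32 bytes in 4-byte chunks -> 8 scores
```

Because each chunk is scored independently, throughput grows with the number of workers, which is the property that lets a microservices design add capacity (or swap in new detection models) without touching the rest of the pipeline.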

For Reality Defender, the key takeaway is that detection is about forensic signals rather than spotting “glitches.” Keeping pace with generative models and scaling to demand also matter, as does working across media, so that voice-only interviews get the same rigor as video meetings, identifying and weeding out voice clones.

“Solely focused on deepfake detection,” says Reality Defender, “our platform works across video, audio and image, delivering results in seconds and integrating directly into tools like Zoom and Teams.”
