Deepfakes, social engineering create potent elixir for fraud: Reality Defender

Bad intentions get new disguises with easily generated synthetic media

A webinar hosted by Reality Defender begins with the statement of a truth that becomes more apparent every day: “synthetic media is no longer a fringe concern. This is a mainstream issue.” The statement is from retired U.S. Lt. Gen. Robert Ashley, who serves as Reality Defender’s federal advisory board member.

The deepfakes panel covers macro trends, threats to law enforcement and government, a technical breakdown of the threats and practical solutions. It has significant implications for biometrics and identity – but also the general ability to know what’s real and what’s not.

Until recently more of a “science experiment,” deepfakes are “now part of a cyber criminal’s attack vector,” says Alex Lisle, Reality Defender’s CTO. He notes that social engineering, or “convincing someone to do something that they really shouldn’t do,” is always one of the most successful attack vectors that a cyber criminal has.

“Now you couple that with the fact that with commoditized hardware, a gaming laptop and some open-source software, you now have an incredible leveraging tool for those sorts of attack vectors.” Large-scale automated social engineering attacks are now possible, for both voice and video. A recent spike in deepfake employment and hiring fraud has been especially problematic.

Catherine Ordun of Booz Allen talks about the technical and economic changes that have enabled us to get to this point. “Nowadays, you can buy RTX 2080 GPUs that I had to buy back in the day for 1,200 dollars – you can get them for 400 bucks now.” Combine a few, and you have a diffusion engine powerful enough to produce deepfakes.

The architecture of the algorithm has changed, but so too has the computational capability to recombine components.

On the other hand, old habits die hard, and making people hyperaware of the quality of their reality is not easy. Lisle points out that our society assumes (for good reason) that it can believe what it sees. But deepfakes are undermining that fundamental assumption. This is especially true when organizations are processing more data than ever before – and using outdated tech stacks to do so.

Could we get to a point at which deepfakes are impossible to detect? Will we need to monitor breathing patterns and corneal reflections to know if it’s really grandma on the phone?

Agentic AI could be part of the solution. Ordun says that there will always be signals that AI can detect, even if they’re invisible to the human eye; many are yet to be explored. Lisle adds that, while some nation states can leverage significant computing power, most cybercriminals don’t have the juice to generate pristine deepfakes, since they’re mostly using commercial-grade foundation models that are designed to be just good enough to fool a human.

Social engineering: deepfake fraud puts new mask on old chicanery

Ultimately, technology is both vulnerability and shield, but it cannot change the fact that some people behave badly. A post from Reality Defender goes deeper into social engineering, which has been supercharged by the emergence of AI.

“For centuries, attackers have relied on persuasion, deception, and emotional manipulation to get past human defenses long before breaching technical ones,” says the post. Deepfakes exploit trust, which can make it much easier to convince people to do things they shouldn’t. “Deepfakes aren’t just digital illusions,” it says; “they’re deception tools. When paired with social engineering, they make scams feel personal, urgent, and real.”

“Attackers no longer need direct network access; a believable voice note, convincing video call, or forged image can be enough to manipulate human trust. Detection can’t wait for forensic review – it must happen in real time, during the interaction itself.”
