Deepfakes, social engineering create potent elixir for fraud: Reality Defender

Bad intentions get new disguises with easily generated synthetic media

A webinar hosted by Reality Defender begins with the statement of a truth that becomes more apparent every day: “synthetic media is no longer a fringe concern. This is a mainstream issue.” The statement is from retired U.S. Lt. Gen. Robert Ashley, a member of Reality Defender’s federal advisory board.

The deepfakes panel covers macro trends, threats to law enforcement and government, a technical breakdown of the threats and practical solutions. It has significant implications for biometrics and identity – but also the general ability to know what’s real and what’s not.

Until recently more of a “science experiment,” deepfakes are “now part of a cyber criminal’s attack vector,” says Alex Lisle, Reality Defender’s CTO. He notes that social engineering, or “convincing someone to do something that they really shouldn’t do,” has always been one of the most successful attack vectors available to a cybercriminal.

“Now you couple that with the fact that with commoditized hardware, a gaming laptop and some open-source software, you now have an incredible leveraging tool for those sorts of attack vectors.” Large-scale automated social engineering attacks are now possible, for both voice and video. A recent spike in deepfake employment and hiring fraud is especially problematic.

Catherine Ordun of Booz Allen talks about the technical and economic changes that have enabled us to get to this point. “Nowadays, you can buy RTX 2080 GPUs that I had to buy back in the day for 1,200 dollars; you can get them for 400 bucks now.” Combine a few, and you have a diffusion engine powerful enough to produce deepfakes.

The architecture of the algorithm has changed, but so too has the computational capability to recombine components.

On the other hand, old habits die hard, and making people hyperaware of the quality of their reality is not easy. Lisle points out that our society operates (for good reason) on the assumption that it can believe what it sees. But deepfakes are undermining that fundamental truth. This is especially true when organizations are processing more data than ever before, and using outdated tech stacks to do so.

Could we get to a point at which deepfakes are impossible to detect? Will we need to monitor breathing patterns and corneal reflections to know if it’s really grandma on the phone?

Agentic AI could be part of the solution. Ordun says that there will always be metrics that AI can detect, even if they’re invisible to the human eye; many are yet to be explored. Lisle adds that, while some nation-states can leverage significant computing power, most cybercriminals don’t have the juice to generate pristine deepfakes, since they’re mostly using commercial-grade foundation models that are designed to be good enough to fool a human.
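The idea that generated media leaves machine-detectable traces, even when invisible to a human viewer, can be illustrated with a toy frequency-domain check. This is a hypothetical sketch, not Reality Defender’s method: the function, the “low-frequency” cutoff, and the synthetic test images are all illustrative assumptions.

```python
import numpy as np

def high_freq_energy_ratio(image: np.ndarray) -> float:
    """Fraction of spectral energy outside a central low-frequency band.

    Some generative pipelines leave subtle spectral artifacts; one crude,
    illustrative cue is an unusual high-frequency energy profile.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    r = min(h, w) // 4  # treat the central box as "low frequency"
    low = spectrum[cy - r:cy + r, cx - r:cx + r].sum()
    return float(1.0 - low / spectrum.sum())

# Smooth images concentrate energy at low frequencies; noise does not.
rng = np.random.default_rng(0)
smooth = np.outer(np.hanning(64), np.hanning(64))  # stand-in "natural" image
noisy = rng.standard_normal((64, 64))              # stand-in artifact-heavy image
print(high_freq_energy_ratio(smooth) < high_freq_energy_ratio(noisy))  # prints True
```

A real detector would learn such cues from data rather than hand-pick one statistic, but the point stands: signals a human eye never registers are perfectly measurable.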

Social engineering: deepfake fraud puts new mask on old chicanery

Ultimately, technology is both the vulnerability and the shield, but neither changes the underlying problem: people can be manipulated. A post from Reality Defender goes deeper into social engineering, which has been supercharged by the emergence of AI.

“For centuries, attackers have relied on persuasion, deception, and emotional manipulation to get past human defenses long before breaching technical ones,” says the post. Deepfakes exploit trust, which can make it much easier to convince people to do things they shouldn’t. “Deepfakes aren’t just digital illusions,” it says; “they’re deception tools. When paired with social engineering, they make scams feel personal, urgent, and real.”

“Attackers no longer need direct network access; a believable voice note, convincing video call, or forged image can be enough to manipulate human trust. Detection can’t wait for forensic review – it must happen in real time, during the interaction itself.”
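The “detection must happen in real time, during the interaction itself” point can be sketched as a streaming loop that scores each chunk of a call as it arrives and alerts the moment a rolling average crosses a threshold. Everything here is a hypothetical stand-in: `score_chunk` represents whatever detection model is in use, and the window and threshold are arbitrary.

```python
from collections import deque
from typing import Callable, Iterable, Optional

def flag_live(chunks: Iterable,
              score_chunk: Callable[..., float],
              window: int = 5,
              threshold: float = 0.7) -> Optional[int]:
    """Score a media stream chunk-by-chunk during the interaction.

    Returns the index of the first chunk at which the rolling mean
    synthetic-score reaches `threshold`, or None if the stream ends clean.
    """
    recent = deque(maxlen=window)
    for i, chunk in enumerate(chunks):
        recent.append(score_chunk(chunk))  # placeholder detection model
        if len(recent) == window and sum(recent) / window >= threshold:
            return i  # alert mid-call, not after forensic review
    return None

# Usage with dummy scores: the "caller" turns synthetic at chunk 5.
scores = [0.1, 0.1, 0.2, 0.1, 0.1, 0.9, 0.9, 0.9, 0.9, 0.9]
idx = flag_live(range(10), lambda i: scores[i])
print(idx)  # prints 8: the rolling mean first crosses 0.7 at chunk 8
```

The rolling window trades a few chunks of latency for fewer false alarms; the essential contrast with after-the-fact forensics is that the alert fires while the conversation is still happening.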
