
Deepfakes, social engineering working together to ‘break human judgment’

Sophistication of tools means you should not believe your eyes, say experts
One of 2025’s top tech trends looks just like your boss, or maybe your mom, or Aunt Stella. Deepfakes have entered the mainstream, the product of generative AI tech that makes creating them cheap, easy and – for fraudsters – potentially very lucrative. Realistic video of fake people can be weaponized for scams, espionage and political manipulation. Fake employees have become a scourge in remote job interviews. So-called nudify apps have become such a problem that the UK government pledged today to ban them.

It is no longer a foregone conclusion that the person on your screen is a real human, and that’s a much bigger problem than a few extra fingers.

“When people think about deepfakes, they often picture fake videos or voice-cloned calls,” says Arif Mamedov, CEO of identity provider Regula Forensics, in an article from TechNewsWorld. “In reality, the bigger risk runs much deeper. Deepfakes are dangerous because they attack identity itself, which is the foundation of digital trust.”

Mamedov identifies three significant risks associated with deepfakes. Authentication is easily compromised in systems that rely on static or replayable signals – which is to say, those lacking liveness detection. The speed at which fraud can scale is another; Mamedov asserts that generative AI tools have turned fraud into “an industrial process.” Finally, deepfakes create false confidence. “They often pass existing controls, so organizations think they’re protected while fraud quietly grows.”
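The “static or replayable signals” problem Mamedov describes can be made concrete with a small sketch. The names and mechanism below are illustrative assumptions, not anything from the article: a check that accepts any replay of a previously captured credential is defeated by a recording, while a challenge-response check binds each attempt to a fresh nonce – which is, in essence, the property liveness detection adds to biometric capture.

```python
import hashlib
import hmac
import secrets

# Hypothetical enrolled secret; stands in for a stored biometric template.
SECRET = b"enrolled-template"

def static_check(presented: bytes) -> bool:
    # Replayable: a recording of the enrolled signal always passes.
    return hmac.compare_digest(presented, SECRET)

def challenge_response_check(challenge: bytes, response: bytes) -> bool:
    # The verifier issues a fresh challenge; a valid response must be
    # computed live from the secret and that challenge, so a response
    # replayed from an earlier session fails.
    expected = hmac.new(SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)

# A replayed capture beats the static check...
captured = SECRET
assert static_check(captured)

# ...but not the challenge-response check, because the old response
# was bound to an old challenge.
old_challenge = secrets.token_bytes(16)
old_response = hmac.new(SECRET, old_challenge, hashlib.sha256).digest()
new_challenge = secrets.token_bytes(16)
assert not challenge_response_check(new_challenge, old_response)

# A response computed freshly against the new challenge does pass.
live_response = hmac.new(SECRET, new_challenge, hashlib.sha256).digest()
assert challenge_response_check(new_challenge, live_response)
```

The same logic explains why deepfakes succeed against verification systems that compare presented media to a stored reference without demanding anything session-specific from the subject.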

TechNewsWorld quotes several executives from the biometrics and digital identity space in its look at deepfakes. Most express some variant on the same theme. Mike Engle, chief strategy officer for 1Kosmos, warns that “AI can now convincingly impersonate executives, employees, job candidates, or customers using synthetic voices, faces and documents, allowing attackers to bypass onboarding and help desk and approval workflows that were never designed to detect manufactured identities. Once a fake identity is enrolled, every downstream control — MFA, VPNs, SSO — ends up protecting the attacker instead of the organization.”

David Lee, field CTO of Saviynt, says most people still naively assume that authority is legitimate. In this, deepfakes “break human judgment.”

“A believable executive voice can authorize payments, override processes, or create urgency that short-circuits rational decision-making before security controls ever come into play,” Lee says.

James E. Lee, president of the Identity Theft Resource Center (ITRC), notes that deepfake-driven scams can be particularly dangerous for businesses operating at a thin margin. Ruth Azar-Knupffer, co-founder of VerifyLabs, points out that “the proliferation of digital communication, such as video calls and social media, has expanded attack opportunities, making deepfakes a growing vector for scams and disinformation.”

Emotional alerts a better flag than audio, video tells

Mamedov says the quality of deepfakes now exceeds what many verification systems were built to handle, and generating them at scale can be as easy as playing a video game. “What used to be an individual effort to craft a convincing deepfake is now a plug-and-play ecosystem. Fraudsters can buy complete ‘persona kits’ on demand: synthetic faces, deepfake voices, digital backstories.” Regula’s data shows that about one in three organizations has already experienced deepfake fraud. “Identity spoofing, biometric fraud, and deepfakes now sit firmly in the mainstream fraud playbook.”

As such, the need for up-to-date training is urgent. Organizations like Florida-based KnowBe4 are now coaching businesses on how to trust their instincts when it comes to GenAI – especially since the eyes are no longer reliable, as tells like weird mouths or choppy voices get ironed out of deepfake outputs.

KnowBe4 Chief Human Risk Management Strategist Perry Carpenter says “the single best thing that anybody can do is if they feel like there’s an emotion that’s being pulled in some way, some emotional lever that’s being touched, whether that is fear or urgency or authority or hope or anything else, that should actually be a signal for them to slow down, and start to analyze the story, the thing that’s being asked of them, and ask does it raise any red flags?”

“The last thing I want somebody to do is to believe that there will always be a visual or audio tell that they can figure out,” he says. “The best thing is always going to be, am I feeling manipulated in some way? Is this asking me to do something out of the ordinary? Is it touching on an emotion in some way? Then how can I verify this through another channel?”

James E. Lee says part of the problem is lacklustre verification technology. “Deepfakes aren’t the core problem. They’re a stress test. They expose how many organizations still rely on recognition instead of verification.”

“The long-term solution isn’t better human detection. It’s treating identity as something that must be explicitly validated and continuously enforced by systems.”

Fake Sam Altman to star in real documentary

The world, in short, is becoming much less certain, and a lot weirder. Witness the forthcoming documentary, previewed by Wired, about a filmmaker who tried to meet Sam Altman but was denied – so he used ChatGPT to generate a fake Sam Altman.

Deepfaking Sam Altman, directed by Adam Bhala Lough, will be released in January.
