Deepfake penetration outpacing security preparedness: GetReal Security

GetReal Security’s new Deepfake Readiness Benchmark Report confirms what many businesses are saying: convincing AI deepfakes are no longer a future scenario or a novelty shared on social media. They’re in the hiring pipeline, on Zoom calls and inside IT help desks. They’re fooling biometric authentication. They might be your supervisor.
The firm’s data shows that eight out of ten organizations encounter AI deepfakes or impersonation attempts at least occasionally, and 45 percent encounter them frequently. Forty-one percent of organizations with 1,000 or more employees report having hired and onboarded a fake job candidate or imposter.
The other end of the spectrum is equally notable: only 1.5 percent of surveyed organizations say they have never encountered an AI deepfake.
The numbers underscore the need for a revised security posture and strengthened deepfake detection, but few organizations are acting with urgency. “Enterprises acknowledge the risk, but underestimate their vulnerability,” says GetReal’s report. “Just over half of enterprises are adapting their identity and access management strategies for GenAI-powered threats.” Most think they’re already in pretty good shape – and won’t know otherwise until a deepfaked systems manager absconds to North Korea with a boatload of assets.
Moreover, deepfakes aren’t just concentrated on one attack vector. With penetration across voice, video, synthetic identity fraud, credential harvesting and more, GenAI attackers are thoroughly integrated across the identity security fabric.
GetReal: what know you of readiness?
GetReal’s white paper underlines the “disconnect between awareness and true preparedness” that plagues most organizations, many of which continue to rely on employee training as a core component of a deepfake response plan – an inadequate security pillar in the context of biometric deepfakes.
Another lapse is reliance on point-in-time identity verification. “Point-in-time authentication and validation rely too heavily on static checks that can’t keep pace with quickly evolving, AI-driven impersonation. Because biometric authentication can now be convincingly spoofed by deepfakes, ongoing identity protection and validation become essential.”
Identity and Access Management (IAM), says GetReal, must evolve in tandem with the deepfake threat. “As deepfake tools advance faster than IAM strategies, enterprises that fail to adopt continuous monitoring and validation of remote likenesses will face a widening gap that attackers will exploit to launch impersonation attacks, carry out fraud, and gain access to corporate systems and sensitive data.”
“Organizations that act now to shift from point-in-time checks to ongoing identity integrity monitoring better position themselves for resilience against AI-powered attacks.”