Deepfakes force enterprises to rethink cybersecurity

Organizations must move beyond simple detection tools to defend against AI-generated impersonations and synthetic media attacks. As generative AI continues to evolve, enterprises must adopt a layered security approach, combining detection technology, verification procedures and provenance tools to defend against the flood of deepfake attacks.
Many existing detection tools remain imperfect: digital forensics expert Hany Farid estimates that some deepfake detection systems are only about 80 percent effective, and they often fail to explain how they determined whether an image or video is fake.
“There’s no explainability. You can’t go into a court of law or explain to the press or public why an image or video is real or fake,” Farid told InformationWeek.
At the same time, detection technology faces the challenge of operating in real time and integrating with enterprise platforms such as Zoom and Google Meet, where deepfake impersonations can occur.
A growing group of cybersecurity companies, including GetReal Security, Reality Defender, Deep Media and Sensity AI, is working to address synthetic media threats by analyzing signals within digital media that are hard to see: visual and acoustic cues such as lighting consistency, shadow angles, voice patterns and facial movements. Environmental data, including location or IP information, can also help identify suspicious content.
However, detection must be part of a broader defense strategy. Organizations are increasingly using red-team exercises to simulate deepfake attacks and expose weaknesses in internal processes. Multi-factor verification, such as confirming requests through trusted call-back numbers or security questions, can also help prevent employees from acting on fraudulent communications.
Another emerging tool is digital provenance, which traces content back to its origin and records whether it has been altered. The Coalition for Content Provenance and Authenticity (C2PA), for example, embeds cryptographically signed metadata into files to track their creation and editing history.
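The provenance idea can be illustrated with a toy example: content ships with a signed manifest recording its history, and any silent alteration breaks verification. The sketch below uses an HMAC over a content hash purely as a stand-in; real C2PA manifests use X.509 certificate-based signatures and a standardized manifest format, and the key, field names and history entries here are invented for illustration.

```python
# Toy illustration of signed provenance metadata, in the spirit of C2PA.
# HMAC stands in for the certificate-based signatures C2PA actually uses.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # hypothetical; real signers use asymmetric keys

def sign_manifest(content: bytes, history: list[str]) -> dict:
    """Bind a hash of the content and its edit history under a signature."""
    manifest = {"sha256": hashlib.sha256(content).hexdigest(), "history": history}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check both the signature and that the content still matches its hash."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claimed["sha256"] == hashlib.sha256(content).hexdigest())

original = b"camera raw frame"
m = sign_manifest(original, ["captured: Camera X", "edited: crop"])
print(verify_manifest(original, m))     # intact content verifies
print(verify_manifest(b"tampered", m))  # silent alteration fails
```

Each legitimate edit would append a new signed history entry; an attacker who modifies the pixels without access to the signing key cannot produce a valid manifest.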
Reality Defender: A single model cannot beat all deepfakes
Reality Defender, which reveals identity-based deception by verifying a person’s face or voice, has provided a technical breakdown of how to structure deepfake defenses for real-world deployment.
The company uses multiple detection models rather than a single scoring system. By analyzing signals across images, audio, and video, security teams can better identify synthetic elements and build targeted defenses, the U.S.-based firm explains in a blog post.
“A single model cannot catch every manipulation,” says the company. “Ultimately, enterprise deepfake detection isn’t a single score; it contains specialized signals configured for real-world scale and risk thresholds.”
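The multi-model approach can be sketched as scoring each modality against its own risk threshold and reporting which signals fired, rather than collapsing everything into a single number. The model names, scores and thresholds below are hypothetical placeholders, not Reality Defender's actual detectors.

```python
# Hedged sketch of combining modality-specific detector scores with
# per-signal risk thresholds. All names and values are illustrative.

def assess(scores: dict[str, float], thresholds: dict[str, float]) -> dict:
    """Flag each modality against its own threshold and report which
    signals tripped, preserving some explainability."""
    flagged = {m: s for m, s in scores.items() if s >= thresholds.get(m, 1.0)}
    return {
        "suspicious": bool(flagged),
        "flagged_signals": sorted(flagged),  # which detectors fired, and why
    }

# Example: video artifacts look clean, but the voice model is confident.
scores = {"face_swap": 0.12, "voice_clone": 0.91, "lip_sync": 0.40}
thresholds = {"face_swap": 0.8, "voice_clone": 0.7, "lip_sync": 0.8}
print(assess(scores, thresholds))  # flags voice_clone only
```

Keeping per-signal results visible also speaks to Farid's explainability concern: an analyst can see that the voice channel, not the video, triggered the alert.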
True accuracy, bias and resilience can only be measured when a system continuously monitors massive volumes of media, it notes.
Enterprises are not the only ones that could use better techniques against synthetic media.
In February, Reality Defender conducted an experiment with NATO, introducing deepfakes into a realistic warfighting scenario to assess their impact on experienced military officials. The findings were dismal, according to the firm, reinforcing the “urgent need for automated deepfake detection across the entire spectrum of military operations.”