
Deepfakes force enterprises to rethink cybersecurity

Organizations must move beyond simple detection tools to defend against AI-generated impersonations and synthetic media attacks. As generative AI continues to evolve, enterprises must adopt a layered security approach, combining detection technology, verification procedures and provenance tools to defend against the flood of deepfake attacks.

Many existing detection tools remain imperfect: digital forensics expert Hany Farid estimates that some deepfake detection systems are only about 80 percent effective and often fail to explain how they determined whether an image or video is fake.

“There’s no explainability. You can’t go into a court of law or explain to the press or public why an image or video is real or fake,” Farid told InformationWeek.

At the same time, detection technology faces the challenge of operating in real time and integrating with enterprise platforms such as Zoom or Google Meet, where deepfake impersonations can occur.

A growing group of cybersecurity companies, including GetReal Security, Reality Defender, Deep Media and Sensity AI, is working to address synthetic media threats by analyzing signals within digital media that are hard for humans to perceive: visual and acoustic cues such as lighting consistency, shadow angles, voice patterns and facial movements. Environmental data, including location or IP information, can also help identify suspicious content.

However, detection must be part of a broader defense strategy. Organizations are increasingly using red-team exercises to simulate deepfake attacks and expose weaknesses in internal processes. Multi-factor verification, such as confirming requests through trusted call-back numbers or security questions, can also help prevent employees from acting on fraudulent communications.
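The call-back procedure described above can be sketched in code. This is a minimal illustration, not any vendor's product: the directory of trusted numbers, the requester names and the dollar threshold are all hypothetical.

```python
# Hedged sketch of an out-of-band (call-back) verification policy for
# high-risk requests such as wire transfers. The directory and the
# amount threshold below are assumptions for illustration only.

TRUSTED_CALLBACKS = {"cfo": "+1-555-0100", "ceo": "+1-555-0101"}  # hypothetical

def verification_step(requester: str, amount: float, limit: float = 10_000) -> str:
    """Decide whether a request may proceed or requires a trusted call-back.

    Requests under the limit proceed; larger ones must be confirmed on a
    number from the pre-approved directory, never one supplied in the
    (possibly fraudulent) message itself.
    """
    if amount < limit:
        return "proceed"
    number = TRUSTED_CALLBACKS.get(requester)
    return f"call back on {number}" if number else "escalate to security"
```

The key design point is that the confirmation channel comes from an internal directory, so a deepfaked voice or spoofed email cannot redirect the check to an attacker-controlled number.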

Another emerging tool is digital provenance, which traces content back to its origin and records whether it has been altered. The Coalition for Content Provenance and Authenticity (C2PA), for example, embeds cryptographically signed metadata into files to track their creation and editing history.
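The idea of binding an edit history to content with a cryptographic signature can be illustrated with a toy example. This is not the real C2PA format, which embeds a CBOR/JUMBF manifest signed with X.509 certificates; the sketch below uses a shared-key HMAC purely to show the tamper-evidence principle.

```python
# Illustrative sketch of tamper-evident provenance metadata (NOT C2PA).
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # hypothetical; C2PA uses asymmetric certificates

def sign_manifest(content: bytes, history: list) -> dict:
    """Bind an edit history to a hash of the content and sign the pair."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "history": history,  # e.g. ["created: camera", "edited: crop"]
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Recompute the signature and check the content hash still matches."""
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(manifest.get("signature", ""), expected)
            and manifest["content_sha256"] == hashlib.sha256(content).hexdigest())
```

Any alteration to the media bytes or to the recorded history invalidates the check, which is what lets provenance systems show whether a file has been modified since capture.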

Reality Defender: A single model cannot beat all deepfakes

Reality Defender, which reveals identity-based deception by verifying a person’s face or voice, has provided a technical breakdown of how to structure deepfake defenses for real-world deployment.

The company uses multiple detection models rather than a single scoring system. By analyzing signals across images, audio, and video, security teams can better identify synthetic elements and build targeted defenses, the U.S.-based firm explains in a blog post.

“A single model cannot catch every manipulation,” says the company. “Ultimately, enterprise deepfake detection isn’t a single score; it contains specialized signals configured for real-world scale and risk thresholds.”
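The multi-model approach described above can be sketched as a set of per-modality detectors, each with its own risk threshold. The scores, modalities and thresholds below are invented for illustration; Reality Defender's actual models and configurations are not public.

```python
# Minimal sketch of ensemble-style deepfake flagging, assuming
# hypothetical per-modality detector scores in [0, 1].

def flag_media(scores: dict, thresholds: dict) -> list:
    """Return the modalities whose detector score crosses its threshold.

    Rather than collapsing everything into one number, each signal is
    judged against a threshold tuned to its own risk profile.
    """
    return [m for m, s in scores.items() if s >= thresholds.get(m, 1.0)]

# Example: the image and video detectors fire, the audio detector does not.
scores = {"image": 0.91, "audio": 0.42, "video": 0.77}
thresholds = {"image": 0.8, "audio": 0.6, "video": 0.7}  # assumed risk settings
flagged = flag_media(scores, thresholds)
```

Keeping the signals separate also tells a security team *which* element looks synthetic, which a single aggregate score cannot do.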

True accuracy, bias and resilience can only be measured when a system continuously monitors massive volumes of media, it notes.

Enterprises are not the only ones that could use better techniques against synthetic media.

In February, Reality Defender conducted an experiment with NATO, introducing deepfakes into a realistic warfighting scenario to assess their impact on experienced military officials. The findings were dismal, according to the firm, reinforcing an “urgent need for automated deepfake detection across the entire spectrum of military operations.”
