US security agencies tout biometrics, liveness detection to defend against deepfakes

Threats from deepfakes are increasing exponentially, and organizations must be ready to introduce new technologies to tackle them, including biometric identity verification and liveness detection, says a new report from U.S. security agencies.

The report, titled Contextualizing Deepfake Threats to Organizations, was published by the National Security Agency (NSA), the Federal Bureau of Investigation (FBI) and the Cybersecurity and Infrastructure Security Agency (CISA) and gives recommendations for beefing up security against synthetic media, such as deepfakes.

“Many organizations are attractive targets for advanced actors and criminals interested in executive impersonation, financial fraud, and illegitimate access to internal communications and operations,” the report warns.

The research comes as more countries are coming up with guidelines to protect themselves from the deepfake threat. Last week, Europe’s top technical standards agency ETSI released a report on the risks of using artificial intelligence to manipulate digital identity representations.

The U.S. report recommends fighting synthetic media by introducing real-time verification, passive detection and protecting high-priority officers and their communications.

Considering the rapid improvements in generative AI and real-time rendering, identity verification should be introduced for real-time communications, which will require liveness testing. Companies working on biometric liveness tests to detect attacks powered by virtual injection techniques include ID R&D, FaceTec and iProov, the report notes.

Those who work with sensitive communications, especially financial transactions, should build verification into their workflows, from biometrics to multi-factor authentication (MFA), one-time passwords (OTPs), PINs and the entry of personal details.

Another recommendation is introducing passive detection of deepfakes: forensic analysis that verifies the authenticity of previously created media. Organizations should also prepare for deepfake attacks by sharing information and by planning and rehearsing responses to exploitation attempts.

Although media headlines have been highlighting the danger of deepfakes to election processes and the risk of disinformation, U.S. security agencies believe that organizations face the biggest risks. Synthetic media could be used to damage brands, impersonate company leaders or obtain access to sensitive information. Phishing using deepfakes will be an even harder challenge than it is today, the agencies say.

Several private and public initiatives have been set up across the U.S. to tackle the growing problem of deepfakes. The DARPA Semantic Forensics project includes Nvidia, PAR Government Systems, SRI International and other research institutions. Another is the Center for Identification Technology Research (CITeR), funded by the National Science Foundation and other partners. The Air Force Research Lab (AFRL) recently awarded a contract to DeepMedia to develop deepfake detection capabilities.

Deepfake detection tools have been fielded by several companies, including Microsoft, Intel, and Google. Adobe launched its Content Authenticity Initiative (CAI) in 2019. The Coalition for Content Provenance and Authenticity (C2PA) combines the Adobe-led CAI with Project Origin, an initiative led by Microsoft and the BBC to tackle disinformation in digital news.
