
European standards agency releases report on deepfakes

As deepfakes flourish across the world, countries are trying to come up with regulations that would rein in the threats they pose.

The latest efforts come from Europe’s top technical standards agency, which released a report on the risks of using artificial intelligence to manipulate digital identity representations. These risks include not only fraud, such as attacks on biometric authentication, but also the potential to sow confusion, undermine elections and create conflict.

The report, titled ETSI GR SAI 011, was released this week by the Securing AI group of the European Telecommunications Standards Institute (ETSI). As a not-for-profit organization, ETSI works with the European Commission and the European Free Trade Association (EFTA) on setting technical standards.

“AI techniques allow for automated manipulations which previously required a substantial amount of manual work, and, in extreme cases, can even create fake multimedia data from scratch,” Scott Cadzow, Chair of the ETSI Securing Artificial Intelligence Industry Specification Group, says in a release.

Deepfake attacks in different media formats, such as pictures, videos, audio and text, can be used to influence public opinion. A notorious case is the March 2022 deepfake video of Ukrainian president Volodymyr Zelenskyy announcing the country’s capitulation. Deepfakes can also be used in personal attacks aimed at ruining victims’ reputations or humiliating them, such as faked sexually explicit videos.

But ETSI also highlights that deepfake attacks are targeting remote biometric identification and authentication.

Remote identification through video is used in many European countries by banks to open accounts for customers and ensure compliance. Speaker recognition systems are also used to authenticate customers requesting transactions. The security level of these procedures and their resulting susceptibility to attacks using manipulated identity representations varies significantly, the report warns.

Attacks may involve biometric data from third persons obtained without their knowledge, or rely on purely synthetic data. In August, the Hong Kong police arrested scammers who used doctored images and stolen ID cards to deceive banks.

Deepfake attacks also include social engineering such as the so-called CEO fraud attack (also called business email compromise), in which an attacker impersonates an official person or a superior and requests a money transfer. A survey from ID R&D, for instance, showed that 40 percent of businesses or their customers have already encountered deepfake attacks.

“Deepfakes pose a complex problem, for which there is no panacea but which can best be combated by a combination of measures on various levels,” the report says.

ETSI proposes several solutions to the deepfake scourge, including educating and raising awareness and introducing regulation that requires marking manipulated identity representations. On a technical level, researchers can use detection methods such as media forensics or apply AI systems trained to spot manipulated content.

Attacks on authentication methods, including biometrics, can be addressed by decreasing the chance that fake content will be successful, the agency notes. This strategy includes introducing a high-level challenge-response protocol.

In remote identification through video, this could mean requiring the person to perform specific movements, move objects, or produce other types of responses. In voice authentication, it may mean asking the person to speak words that are hard to pronounce and that audio generation methods struggle with. Another strategy is measuring the delay in responses, since a long delay can hint that a computer is generating its answer.
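The challenge-response idea with a delay check can be sketched in a few lines. This is an illustrative sketch only, not code from the ETSI report; the challenge list, threshold value, and function names are all hypothetical.

```python
import random

# Hypothetical pool of liveness challenges; a real system would draw
# from a much larger, unpredictable set so responses cannot be pre-recorded.
CHALLENGES = ["turn your head to the left", "hold up three fingers", "repeat the phrase shown on screen"]

# Illustrative threshold: generous for a human, tight for frame-by-frame synthesis.
MAX_RESPONSE_DELAY_SECONDS = 2.0

def issue_challenge() -> str:
    """Pick a random challenge at verification time, not in advance."""
    return random.choice(CHALLENGES)

def evaluate_response(issued_at: float, responded_at: float, response_correct: bool) -> bool:
    """Accept only if the response is correct AND arrived quickly.

    A long delay between challenge and response can be a hint that a
    computer is synthesizing the reply rather than a live person reacting.
    """
    delay = responded_at - issued_at
    return response_correct and delay <= MAX_RESPONSE_DELAY_SECONDS
```

In this sketch, a correct but slow response (say, five seconds after the challenge) is rejected just like an incorrect one, reflecting the report's point that generation latency itself is a usable signal.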

“A more robust way to address the risk may be for companies and organizations to build robust processes, where important decisions and high-value transactions are not taken on the basis of implicit biometric authentication, but instead are always confirmed using a standard procedure involving multi-factor authentication,” the report concludes.
