European standards agency releases report on deepfakes
As deepfakes flourish across the world, countries are trying to come up with regulations to rein in the threats they pose.
The latest effort comes from Europe’s top technical standards agency, which released a report on the risks of using artificial intelligence to manipulate digital identity representations. Those risks include not only fraud, such as attacks on biometric authentication, but also the potential to sow confusion, undermine elections and create conflicts.
The report, titled ETSI GR SAI 011, was released this week by the Securing AI group of the European Telecommunications Standards Institute (ETSI). As a not-for-profit organization, ETSI works with the European Commission and the European Free Trade Association (EFTA) on setting technical standards.
“AI techniques allow for automated manipulations which previously required a substantial amount of manual work, and, in extreme cases, can even create fake multimedia data from scratch,” Scott Cadzow, Chair of the ETSI Securing Artificial Intelligence Industry Specification Group, says in a release.
Deepfake attacks in different media formats, such as pictures, videos, audio and text, can be used to influence public opinion. A notorious case is the March 2022 deepfake video of Ukrainian President Volodymyr Zelenskyy announcing the country’s capitulation. Deepfakes can also be used in personal attacks meant to ruin the victim’s reputation or humiliate them, such as faked sexually explicit videos.
But ETSI also highlights that deepfake attacks are targeting remote biometric identification and authentication.
Remote identification through video is used by banks in many European countries to open customer accounts and meet know-your-customer compliance requirements. Speaker recognition systems are also used to authenticate customers requesting transactions. The security level of these procedures, and their resulting susceptibility to attacks using manipulated identity representations, varies significantly, the report warns.
Attacks may involve biometric data from third persons obtained without their knowledge, or rely on purely synthetic data. In August, the Hong Kong police arrested scammers who used doctored images and stolen ID cards to deceive banks.
Deepfake attacks also include social engineering such as the so-called CEO fraud attack (also called business email compromise), in which an attacker impersonates an executive or other superior and requests a money transfer. A survey from ID R&D, for instance, showed that 40 percent of businesses or their customers have already encountered deepfake attacks.
“Deepfakes pose a complex problem, for which there is no panacea but which can best be combated by a combination of measures on various levels,” the report says.
ETSI proposes several measures against the deepfake scourge, including education and awareness-raising, and regulation requiring that manipulated identity representations be marked as such. On a technical level, researchers can use detection methods such as media forensics or apply AI systems trained to spot manipulated content.
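As one illustration of the media-forensics approach, the sketch below applies error level analysis (ELA), a classic forensic heuristic: re-saving a JPEG at a known quality and diffing it against the original can expose regions whose compression history differs from the rest of the image. ELA is one example technique, not the report’s prescribed tool, and the file names are placeholders; it is a screening aid, not a definitive detector.

```python
import io

from PIL import Image, ImageChops


def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Re-save the image as JPEG and diff it against the original.

    Regions edited after the image was last saved tend to compress
    differently, so they stand out in the amplified difference map.
    """
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)

    # Pixel-wise absolute difference between original and re-saved copy.
    diff = ImageChops.difference(original, resaved)

    # The raw differences are faint; rescale so the brightest pixel maps
    # to full intensity and compression anomalies become visible.
    max_diff = max(hi for _, hi in diff.getextrema()) or 1
    return diff.point(lambda value: min(255, value * 255 // max_diff))


if __name__ == "__main__":
    # "suspect.jpg" is a placeholder input path.
    error_level_analysis("suspect.jpg").save("suspect_ela.png")
```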
Attacks on authentication methods, including biometrics, can be addressed by decreasing the chance that fake content will be successful, the agency notes. This strategy includes introducing a high-level challenge-response protocol.
In remote identification through video, this could mean requiring the person to perform specific movements, move objects, or produce other types of responses. In voice authentication, it could mean asking the person to speak words that are hard to pronounce and that audio generation methods struggle with. Another strategy is to measure response delay, since a long delay can hint that a machine is synthesizing the reply.
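To make the challenge-response idea concrete for the voice scenario, here is a minimal sketch of a verifier that issues an unpredictable prompt and enforces a response deadline. The word list, timeout value, and function names are illustrative assumptions, and the speech-matching step (a recognizer comparing the caller’s transcript to the prompt) is assumed rather than shown.

```python
import secrets
import time

# Illustrative prompt list; a real deployment would curate phrases that
# observed text-to-speech systems render poorly.
CHALLENGE_WORDS = ["otorhinolaryngologist", "rural brewery", "sixth sick sheikh"]

MAX_RESPONSE_DELAY = 3.0  # seconds; tuned to the channel's normal latency


def issue_challenge() -> tuple[str, float]:
    """Pick an unpredictable prompt and note when it was issued."""
    return secrets.choice(CHALLENGE_WORDS), time.monotonic()


def check_response(issued_at: float, spoken_matches: bool) -> bool:
    """Accept only a correct answer that arrives promptly.

    A long delay can hint that a machine is synthesizing the reply
    rather than a person speaking it.
    """
    delay = time.monotonic() - issued_at
    return spoken_matches and delay <= MAX_RESPONSE_DELAY


if __name__ == "__main__":
    phrase, issued_at = issue_challenge()
    print(f"Please say: {phrase}")
    # spoken_matches would come from a speech recognizer comparing the
    # caller's transcript to the prompt; hard-coded here for the demo.
    print("accepted" if check_response(issued_at, spoken_matches=True) else "rejected")
```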
“A more robust way to address the risk may be for companies and organizations to build robust processes, where important decisions and high-value transactions are not taken on the basis of implicit biometric authentication, but instead are always confirmed using a standard procedure involving multi-factor authentication,” the report concludes.