Digital ID verification increasingly targeted by AI deepfakes, advisory warns
While digital ID verification is gaining global momentum, it is also attracting new threats in the form of advanced deepfake software found on the dark web, writes cyber-intelligence firm Gemini Advisory.
Deepfakes are nothing new. Yet demand for them is now on the rise as banks and other financial services providers increasingly rely on digital ID verification through selfies and videos to secure their automated services. The emergence of new deepfake technologies represents a serious threat to banks, cryptocurrency exchanges, and other businesses that increasingly use image and video authentication to comply with KYC and AML regulations.
Gemini Advisory has been tracking deepfake services offered on the deep web to assess their supply and demand. Various listings found on online forums appear to cater to a growing client base that seeks to defraud digital accounts by faking a user’s facial features with face-change technology to spoof biometric systems. These services also seem to be relatively affordable, with some videos priced between $10 and $30 per minute. Aside from the deepfakes themselves, sellers also offer tutorials on how to circumvent a compromised account’s security measures.
While most of these illicit services were long rendered with basic tools such as Adobe Photoshop’s face-swapping features, more advanced applications have emerged and entered the market. This new generation of deepfake software uses AI, neural networks, and machine learning to create sophisticated image and video forgeries.
The two most widely discussed deepfake services are DeepFaceLab and Avatarify. Aside from their performance, these new solutions also promise to be easy to use. The open-source nature of the programs allows developers to keep improving the software, making deepfake detection more difficult.
According to Gemini, current deepfake detection software solutions are lagging behind these threats. In contrast to the rapidly evolving deepfakes, most current detection programs achieve only 65.18 percent precision. The lack of sufficient options and the growing threat of deepfakes will therefore likely spur the development of stronger software in the near future. One such program, Microsoft Video Authenticator, could be the answer. The program, launched in September 2020, promises higher performance and sophistication in detecting image and video manipulations.
Biometrics providers have been applying lessons from the development of liveness detection technologies to thwarting deepfakes.
Deepfakes and manipulated chatbots face public backlash in South Korea
The South Korean public is pushing back against deepfake technology amid a series of recent controversies. Two prominent reasons for this outcry are the emergence of deepfake pornography and malfunctioning chatbots, writes DW.
The situation has now escalated to the point of prompting presidential intervention. An online petition reading “Please strongly punish the illegal deepfake [images] that cause female celebrities to suffer” garnered more than 375,000 signatures. The campaign is urging the Korean government to act to stop deepfake pornography that transplants celebrities into explicit images and videos.
In a similar development, Korean company Scatter Lab faced public criticism after its chatbot Lee Luda began to send insults and explicit messages. The AI-powered chatbot, originally envisioned to engage Facebook Messenger users, began to malfunction after some users hijacked it and taught it insults and homophobic remarks.
As a result, Scatter Lab issued an apology and Lee Luda was taken offline after a mere 19 days. The company’s CEO Kim Jong-Yoon stated that more work is needed to teach Luda how to appropriately communicate. Yet, pornography and inappropriate language are not the only points of contention surrounding deepfake and AI technology in Korea.
Critics also questioned the ethics of bringing back deceased celebrities using holograms and AI. “Technology is both a blessing and a challenge in every society, so I think that is also the case here in Korea. Part of the challenge is related to the ethics that are involved in the digital transformation of our society,” said Dr. Park Saing-in, an economist working at Seoul National University.
DW reports that Korea recently outlawed deepfake videos, making the crime punishable by up to five years of imprisonment and hefty fines.