Deepfakes proliferate across Asia during busy election year

An editorial in South Korea’s Chosun Daily illustrates the scale of illegal deepfake activity ahead of the April elections in the Republic of Korea, and beyond. According to the article, the National Election Commission (NEC) of South Korea detected 129 illegal AI deepfake posts over a 19-day surveillance period. The content included highly convincing deepfake videos of candidates with altered speech or visuals, spread primarily through social media.
To make matters worse, the piece notes, the relatively low sophistication and limited capacity of the NEC’s deepfake monitoring system mean that many more deepfakes are probably going undetected.
Deepfake content had already surfaced during Korea’s 2022 local elections, including a fake video of President Yoon Suk-yeol making political endorsements. That year also saw an extortion case in Thailand involving deepfake police officers. In 2023, Singapore’s Prime Minister and Deputy Prime Minister were targeted when their deepfake likenesses were used to promote cryptocurrency.
According to research from the Global Initiative Against Transnational Organized Crime, the surge in deepfake cases in the Asia-Pacific region between 2022 and 2023 was second only to North America’s. Deepfake fraud in Vietnam spiked by 25.3 percent, with Japan not far behind at 23.4 percent. Biometric deepfake techniques, such as voice cloning and real-time generative video manipulation, are fueling the threat.
It is enough to give credence to Chosun Daily’s assertion that, should this technology impede free and fair elections worldwide, it could “not only bring about a democracy setback but also cause social chaos.”
Cue the regulators.
Bans, watermarks and AI-assisted deepfake detection among defenses employed
A Global Initiative article by Natnicha Surasit, “Rogue replicants: Criminal exploitation of deepfakes in South East Asia,” says that the Chinese government has banned the creation of deepfakes without user consent and now requires AI-generated content to be clearly identified. “South Korea has criminalized the distribution of deepfakes that may ‘cause harm to public interest’,” writes Surasit. “Australia aims to implement a number of guidelines, such as urging tech firms to label and watermark AI-generated content. And Thailand, the Philippines, Malaysia and Singapore have personal data protection laws to prevent exploitation.”
Back in South Korea, police are showcasing deepfake detection software developed with the National Office of Investigation (NOI). The Korea Times reports that the software was trained on a database of 5.2 million data samples from around 5,400 Koreans, in contrast to a previous model trained on Western data.
Numbers-wise, the NOI says the software takes between five and ten minutes to determine whether a video is authentic, then immediately generates a results sheet. The probability of successful detection is around 80 percent, which is why police say they will use the analysis to inform investigations rather than as direct evidence.
South Koreans go to the polls on April 10.