South Korea data regulator funds deepfake detection, investigates DeepSeek

South Korea is deepening its investment in deepfake detection while also investigating one of the potential sources of misleading and fraudulent content.
The country’s Personal Information Protection Commission (PIPC) has set aside 1.29 billion won (around US$925,770) to strengthen the detection system for unstructured data. Unstructured data refers to information such as voice, video, and images with no defined structure, all of which could be used in making deepfakes.
South Korea saw a surge in deepfakes last year, with the illegal synthesis of people’s faces and personal information into images or videos proving a challenge for existing detection systems. In response, the commission will also incorporate machine learning into the detection system, which previously focused on keywords and text.
“We will establish a system that can identify personal information included in images or videos through the approved supplementary budget, minimizing personal information leaks and illegal distribution and preventing secondary damage,” a commission official was quoted as saying in Chosun Biz.
The commission will need to be especially rigorous: researchers from South Korea and Australia recently analyzed leading deepfake detectors and found them wanting. Every detector in the study failed when tested on real-world content. The researchers concluded that deepfake detectors will need to incorporate “a range of data sets including audio, text, images, and metadata, as well as using synthetic data and contextual analysis” to improve their effectiveness.
The PIPC also met with officials from the Trump administration during the International Association of Privacy Professionals (IAPP) Global Privacy Summit in Washington D.C. The parties discussed data and personal information policy directions in the age of AI. The current U.S. administration has received thousands of public comments, many of them expressing concern over its AI Action Plan.
PIPC takes issue with DeepSeek’s cross-border data, privacy policies
The PIPC has also unveiled the findings of an investigation into DeepSeek, the Chinese-made large language model (LLM) and AI chatbot. DeepSeek launched its services in South Korea on January 15 but provided its privacy policy only in Chinese and English. DeepSeek temporarily suspended new downloads on Apple’s App Store and Google Play while it updated its service.
The commission found that DeepSeek’s privacy policy lacked the transparency required by the Personal Information Protection Act (PIPA), including information on the procedures and methods for destroying personal data. The PIPC also found that DeepSeek transferred users’ personal data to servers located in China and the U.S. without obtaining separate consent for the cross-border transfer, and without disclosing this in its privacy policy when it launched its services in Korea.
DeepSeek also transferred data to Beijing Volcano Engine Technology Co., Ltd. (Volcano) until April 10. DeepSeek said that Volcano is a subsidiary of ByteDance; Volcano is, however, an independent corporation.
Since the commission disclosed its findings, DeepSeek has added an opt-out feature so users can decline to have their inputs used for AI development and training, as well as an age verification procedure to safeguard children. The PIPC has told DeepSeek to enhance transparency and to establish a “robust” legal basis for cross-border data transfers. The company is expected to immediately destroy the user-entered data transferred to Volcano’s server and to publish its privacy policy in Korean. The commission also recommended that DeepSeek upgrade the overall safety measures of its data processing system and designate a domestic agent. DeepSeek has 60 days to report its implementation results to the PIPC.