China’s draft rules on AI ‘virtual humans’ target biometric deepfakes

China has issued a new draft regulation on “digital virtual humans,” or AI chatbots, introducing clear labeling requirements and mandating explicit consent for using a person’s likeness, voice, or other personal data to create a virtual representation. The regulation would also ban the use of digital humans to “evade facial recognition, voice recognition, or other identity authentication mechanisms.”
The Cyberspace Administration of China (CAC) published the draft rules, titled Measures for the Management of Digital Virtual Human Information Services, last Friday. Public consultations will run until May 6th.
Digital virtual humans are defined as virtual figures that exist in nonphysical environments and simulate human appearance and behavior using technologies such as computer graphics, digital image processing, and artificial intelligence. In other words, the rules cover personas used by chatbots built on LLMs such as ChatGPT and Grok, along with the myriad applications based on them.
The clauses requiring consent for use of a person’s likeness and banning bypass attacks against biometric authentication systems clearly target the spread of deepfakes.
The rulebook introduces special protections for minors, prohibiting the offering of virtual intimate relationships to children under 18, including simulated family members or romantic partners, as well as any services that may trigger excessive spending, promote harmful conduct, or compromise their physical or mental well-being.
The draft also introduces measures to prevent the spread of rumors, insults, and defamation through digital humans. Organizations and individuals should resist the creation or distribution of content containing sexual innuendo, cruelty, horror, or discriminatory material, according to the draft (translation).
The regulation is part of China’s efforts to protect personal data and rein in AI technologies. The draft law builds on CAC’s 2025 Measures for Labeling of AI-Generated Synthetic Content, which require visible and technical labels for AI-generated text, images, audio and video.
At the same time, the measures state that the “application of digital virtual human services is encouraged in every field” and that the state aims to support research and development of the technology. The country has been actively promoting AI development, including pledging to integrate it into its economy in its newest five-year plan.
The regulation provides a balanced approach between development and security, guiding the industry while addressing key risks, according to Du Cuilan, deputy director of the National Computer Network Emergency Response Technical Team/Coordination Center of China.
“Digital humans are highly deceptive and realistic,” Du told China Daily. “There are significant risks of them being used to generate harmful content, spread rumors, incite crime, or maliciously induce consumption.”
The new law is also aimed at preserving the government’s tight grip over the country’s information space and upholding its “core socialist values.”
The cyberspace watchdog wants to prohibit the use of digital virtual human services for generating and disseminating content that endangers national security or incites subversion of state sovereignty or the overthrow of the socialist system.
Obscenity, pornography, gambling, and violence are also prohibited, as is content advocating terrorism, extremism, and “historical nihilism,” a term used by the Chinese Communist Party (CCP) to describe viewpoints that go against the official state version of history.