India introduces new deepfake rules for social media

India has tightened its rules on deepfakes, requiring social media platforms to identify, label, and trace AI-generated content using deepfake detection tools, and to ban certain synthetic material outright, including impersonations that use a real person’s identity or voice.
According to the amendments to the 2021 Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, published on Tuesday, all AI-generated or altered material must carry a clear disclosure. Platforms are required to embed persistent metadata and unique identifiers to allow tracing of the content’s origin and the tools used to create it.
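The amendments do not prescribe a wire format for this metadata, but the idea of a persistent, traceable identifier can be sketched as a provenance record attached to the content. The function name, field names, and tool name below are illustrative assumptions for this sketch, not anything specified in the rules:

```python
import hashlib
import json
import uuid


def make_provenance_record(content: bytes, tool_name: str) -> dict:
    """Build a hypothetical provenance record for AI-generated content:
    a unique identifier, a hash tying the record to the exact bytes,
    and the name of the tool that produced it."""
    return {
        "content_id": str(uuid.uuid4()),                 # unique identifier
        "sha256": hashlib.sha256(content).hexdigest(),   # binds record to content
        "generated_by": tool_name,                       # tool used to create it
        "synthetically_generated": True,                 # disclosure flag
    }


# Example: label some (stand-in) generated bytes, then verify the label later.
payload = b"...generated image bytes..."
record = make_provenance_record(payload, "example-image-model")

# A platform could re-hash the bytes to confirm the record still matches.
assert record["sha256"] == hashlib.sha256(payload).hexdigest()

# The record itself could travel as a JSON sidecar or embedded metadata field.
print(json.dumps(record, indent=2))
```

In practice, schemes along these lines already exist, such as C2PA Content Credentials, which cryptographically sign provenance data rather than relying on a bare hash as this sketch does.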
Large social media companies face even stricter obligations: they must require users to declare that their content is AI-generated before it is published. These intermediaries will also need to use automated tools to verify whether such declarations are true by analyzing the content’s format, source, and characteristics.
The amended regulation represents the country’s first dedicated deepfake law, introducing a statutory definition of “synthetically generated information,” or SGI.
Some forms of synthetic content are completely forbidden, including malicious impersonation, false electronic records, intimate imagery produced without consent and content tied to serious crimes, including child sexual abuse material.
At the same time, the regulation makes exceptions for edits like color correction, noise reduction, compression, or translation, provided they don’t alter the underlying meaning. Content that is clearly hypothetical or illustrative is also excluded.
The new rules come as India grapples with an explosion of deepfakes on social media, including viral deepfake videos of prominent Indians.
The amended IT Rules are another tool in the Indian government’s regulatory arsenal, alongside the Digital Personal Data Protection Act 2023 and the Bharatiya Nyaya Sanhita 2023. While the former imposes penalties on entities that process personal data without meaningful consent, the latter criminalizes the spread of false or misleading statements intended to incite public fear or mischief.
The rules come into force on February 20, giving tech companies little time to adjust. The new obligations may have an impact beyond India, as large companies such as YouTube and Meta could extend some of these moderation practices to other markets, according to TechCrunch.
Regulators have also shortened compliance timelines, requiring service providers to fulfill some government takedown orders within just three hours, down from 36 hours. Platforms must acknowledge user complaints within two hours and provide a resolution within seven days.
Not everyone, however, feels at ease with the new rulebook. The Indian government has previously been criticized for its broad powers to remove content on social platforms.
According to digital rights group Internet Freedom Foundation, the Information Technology (IGDME) Amendment introduces “severe digital rights violations that fundamentally undermine constitutional protections.”
“The notified rules drastically compress content removal timelines, transforming intermediaries into rapid-fire censors,” says the New Delhi-based organization.