
India Decodes the Deepfake: Mandatory AI Labelling and 3-Hour Takedowns

The Indian government has mandated that every piece of AI-generated content be clearly marked, aiming to protect users from the rising tide of synthetic misinformation.

The Ministry of Electronics and Information Technology (MeitY) has notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026. These rules, set to take full effect on February 20, 2026, create a formal legal definition of “Synthetically Generated Information” (SGI) and impose some of the strictest compliance timelines anywhere on social media giants such as Meta, Google, and X.


Defining the Boundaries of Synthetic Content

The new framework defines SGI as any audio, visual, or audio-visual information that has been artificially created or altered using a computer resource in a way that makes it appear authentic. This specifically targets deepfakes that depict real individuals saying or doing things they never did. However, the government has provided “good-faith” carve-outs for routine editing—such as colour correction, noise reduction, and translation—ensuring that creative professionals and accessibility tools are not unfairly burdened.

The Three-Hour Takedown and Compliance Squeeze

Perhaps the most aggressive component of the 2026 amendment is the drastic reduction in response windows. For years, platforms operated under a 36-hour window to act on lawful orders. Under the new rules, this has been slashed to just three hours for certain categories of unlawful content, including deepfakes that misrepresent identities or promote misinformation. Other grievance timelines have also been halved, with the standard 15-day disposal period for user complaints reduced to just seven days.

Mandatory Labelling and Provenance Markers

The era of “hidden” AI is officially over in India. Platforms are now legally required to ensure that all SGI is accompanied by a prominent, unambiguous label. These labels must be “persistent,” meaning they cannot be stripped away when a file is downloaded or reshared. Furthermore, platforms must embed metadata and unique identifiers, as well as provenance markers that trace the content back to the originating system. Once applied, these markers are “tamper-proof,” and intermediaries are strictly prohibited from removing or suppressing them.

User Declarations and Platform Accountability

The burden of transparency now starts at the point of upload. Significant Social Media Intermediaries (SSMIs) must now prompt users to declare whether their content is AI-generated before it goes live. But the government isn’t relying solely on an honour system; platforms must deploy “reasonable and proportionate” automated tools to verify these declarations. If a platform knowingly permits unlabelled synthetic content to circulate, it risks losing its Safe Harbour protection under Section 79 of the IT Act, exposing the company to direct legal liability for third-party posts.


The Hinge Point

The 10 February 2026 notification marks the moment when the “Wild West” of generative AI in India is replaced by a regime of mandatory disclosure. This is the hinge point because it shifts the responsibility for truth from the viewer to the platform and the creator. The story changes here because synthetic media is no longer treated as a creative curiosity but as a potential digital weapon that demands strict regulation.

What can no longer remain the same is the ability for deepfakes to go viral before they can be debunked. By compressing the takedown window to three hours and mandating permanent “labels of origin,” the government has essentially built a digital firebreak. This marks the end of the “plausible deniability” era for Big Tech in India and the beginning of a period where digital authenticity is a legal requirement rather than a platform preference.
