New Delhi | Updated: February 11, 2026, 02:43 AM IST
India’s newly notified regulatory framework for social media companies is a mixed bag: on one hand, the IT Ministry has diluted an earlier proposed requirement to display labels on AI-generated content; on the other, platforms now face significantly stricter timelines to take down problematic content, cut from 36 hours earlier to three hours now.
On Tuesday, the IT Ministry notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. Under the fresh amendments, the government has dropped an earlier proposal (from last October) requiring labels on AI-generated content to cover at least 10% of the display space; the notified rules instead require labels that are “prominently” visible.
The changes will come into effect on February 20, the final day of the upcoming India-AI Impact Summit. A senior government official said the change was made because tech companies, during consultations, flagged that the 10% label requirement would eat into the space available for the content itself, making it unappealing for viewers. The proposal had also drawn broader pushback from tech companies.
The pitfalls of AI-generated content were on full display earlier this year when Grok, the AI service of X (formerly Twitter), began generating, at users’ prompting, pictures of women in revealing clothing, undermining their dignity and privacy. The episode drew criticism from governments around the world, including India. Following the furore, which saw Grok banned in some countries, X modified its filters to prevent the creation of such images.
The requirement to take down content more quickly applies not just to AI-generated content but to a wide range of content the law deems unlawful. Platforms must now remove non-consensual intimate imagery within two hours, down from 24 hours earlier, and other forms of unlawful content within three hours, down from the earlier requirement to act within 36 hours.
This change is expected to draw wide pushback from big tech firms. They may raise concerns about the heavy compliance burden, since failing to act on flagged content within the new prescribed timelines could result in a loss of safe harbour, the critical immunity that protects them from legal action for hosting user-generated content.
The government official said the timelines were compressed after feedback from several stakeholders that the earlier timelines were too long and did little to prevent content from going viral. “Tech companies have an obligation now to remove unlawful content much more quickly than before. They certainly have the technical means to do so,” the official said.
“The amendments compress takedown timelines from 36 hours to just three hours, and this applies across all categories of content, not only synthetic or AI-generated material. In reality, there is often no clear or immediate test for illegality, and even law-enforcement communications do not always spell this out unambiguously. Requiring platforms to take definitive action within such a short window will be extremely difficult to operationalise and creates a real risk of over-censorship,” said Rahil Chatterjee, Principal Associate at the Delhi-based Ikigai Law.
As per the new rules, the definition of synthetically generated information (SGI) now has carve-outs for assistive and quality-enhancing uses of AI. Routine and good-faith editing of audio, video or audio-visual content is excluded from the definition.
When an intermediary becomes aware that its service has been used to create, disseminate or host SGI, it must take “appropriate” and “expeditious” action, including immediately disabling access to or removing such information and suspending or terminating user accounts. Where an intermediary allows the creation, modification or sharing of SGI, it must employ “reasonable” and “appropriate” technical measures to prevent SGI that violates applicable laws or misrepresents a real-world event or a person’s identity.
Big tech companies must also require users to declare when information is SGI, deploy appropriate technical measures to verify the accuracy of such declarations and, once verified, ensure that the information is clearly and prominently displayed with an appropriate label or notice.

