Daijiworld Media Network – New Delhi
New Delhi, Feb 10: The Ministry of Electronics and Information Technology (MeitY) on Tuesday notified a stringent new framework declaring several categories of artificial intelligence (AI)-generated and synthetically generated information (SGI) illegal under Indian law, bringing such content firmly within the existing legal framework governing online offences.
Under the amended rules, AI-generated content involving sexual abuse material, non-consensual intimate images, obscene or sexually explicit content, fake documents or electronic records, as well as content used for fraud, harassment, child abuse, misinformation or other criminal activities, will be treated on par with other illegal online content. Users found violating the provisions may face criminal action under laws including the Bharatiya Nyaya Sanhita, 2023.

The framework has been notified through amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
As per the new rules, all AI-generated or synthetically created content must be clearly and prominently labelled as “synthetically generated”. The label must be easily noticeable and accompanied by permanent metadata or technical markers, along with a unique identifier that can trace the content back to the platform or tool used. Users will not be permitted to remove or conceal such labels or metadata.
The amendments also require digital platforms to deploy appropriate technical measures to prevent the creation and circulation of illegal synthetic content, including child sexual abuse material, fake electronic records and content that deceptively portrays real persons or events.
The move follows recent controversy involving Elon Musk-led AI chatbot Grok on X, which came under scrutiny in India over the generation of sexualised and obscene images of women and minors.
Under the revised rules, platforms such as X, Meta, Instagram and other content-hosting services cannot claim that AI-generated content falls outside legal scrutiny. Such platforms must remove or block illegal synthetic content within prescribed timelines to retain safe harbour protection under Section 79 of the IT Act.
Social media platforms have also been directed to inform users at least once every three months of their responsibilities, warning them that accounts may be suspended or terminated for violations and that sharing illegal content could attract penalties or imprisonment.
Special warnings have been mandated for platforms offering AI tools such as ChatGPT, Gemini and Grok, informing users that misuse of AI tools may lead to punishment under criminal law, child protection laws, election laws and laws related to obscenity, trafficking and harassment.
One of the key changes introduced through the 2026 amendment is the sharp reduction in response timelines. Platforms must now comply with government or police orders within three hours, while grievances must be addressed within seven days. Urgent cases must be resolved within 36 hours, and certain takedown actions completed within two hours.
Large social media platforms with millions of users will face additional obligations, including requiring users to declare whether uploaded content is AI-generated and deploying technical tools to verify such declarations. Platforms that knowingly allow or promote illegal synthetic content or fail to act despite awareness will face legal consequences.
The rules, however, clarify that good-faith digital activities such as routine editing, colour correction, noise reduction, subtitles, translation, document formatting, educational content and accessibility tools will not be treated as synthetically generated content, provided they do not mislead users or create false records.