India Tightens AI Content Rules with Labels and Faster Takedowns
India’s government has expanded its internet regulations to explicitly cover AI-generated and deepfake content, which the rules define as “synthetically generated information.” The new rules, effective February 20, 2026, require online platforms to prominently label AI-created audio, visual, and audio-visual content, embed permanent metadata, and maintain technical provenance records so content origins can be traced. Routine editing and educational content are exempt.
Social media platforms must ensure users declare AI-generated content and deploy automated verification tools. Failure to comply could strip platforms of legal safe-harbor protections. The rules also prohibit AI content involving child sexual abuse material, non-consensual intimate imagery, false documents, or misleading depictions of real people or events, with strict penalties including account suspension and mandatory reporting to law enforcement.
Takedown timelines have been sharply shortened from 36 hours to three hours, user grievance responses have been cut from 15 days to seven, and certain actions are now required within two hours. These amendments mark a significant step in India’s effort to regulate online synthetic content and strengthen accountability for digital intermediaries.