TL;DR: India’s Ministry of Electronics and Information Technology has introduced draft regulations requiring AI and social media platforms to prominently label all AI-generated content. This move aims to curb the spread of deepfakes, misinformation, and manipulation, particularly in the context of elections. The proposed rules mandate that labels cover at least 10% of visual content or the initial 10% of audio/video duration, placing accountability on both content creators and platforms.
In a significant step towards ensuring digital transparency and combating the escalating threat of misinformation, India’s Ministry of Electronics and Information Technology (MeitY) has unveiled new draft regulations mandating the clear labeling of all AI-generated content. These proposed amendments to the Information Technology (IT) Rules, 2021, are a direct response to the rapid proliferation and misuse of synthetic media, including deepfakes, which pose substantial risks of deception, financial fraud, and manipulation, especially concerning electoral processes.
The core of the new directive requires social media platforms and AI developers to ensure that any content created, generated, modified, or altered by AI tools in a way that makes it appear authentic or real is clearly identified as such. For visual content, the identifier or label must cover at least 10% of the total surface area of the display; for audio or video content, the label must be present for the initial 10% of its duration. These labels are intended to be prominent and durable, potentially implemented as watermarks, digital signatures, or embedded metadata that are difficult to alter or remove, ensuring traceability and user awareness.
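To make the thresholds concrete, the arithmetic behind the draft rule can be sketched as below. The 10% figures come from the draft regulations; the function name and API are hypothetical, purely for illustration, and do not represent any mandated implementation.

```python
def min_label_requirements(width_px, height_px, duration_s=None):
    """Illustrative helper (not part of the draft rules): computes
    the minimum label footprint under the proposed thresholds:
    - at least 10% of the visual surface area, and
    - for audio/video, a label over the initial 10% of the duration.
    """
    # 10% of the total display surface area, in pixels
    min_label_area_px = 0.10 * width_px * height_px
    # 10% of the running time, in seconds (None for still images)
    min_label_seconds = 0.10 * duration_s if duration_s is not None else None
    return min_label_area_px, min_label_seconds

# Example: a 1920x1080 video running 60 seconds would need a label
# covering roughly 207,360 pixels, shown for the first ~6 seconds.
area, secs = min_label_requirements(1920, 1080, 60)
```

This is only a back-of-the-envelope reading of the thresholds; how "surface area" is measured for responsive or resized displays is not specified in the draft and would presumably be clarified during the consultation.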
Under the proposed framework, accountability is dual-pronged. Content creators will be obligated to declare whether their uploaded material is synthetically generated. Concurrently, social media platforms, particularly significant intermediaries with over 50 lakh (5 million) registered users, will bear increased responsibility. They are required to deploy ‘reasonable and proportionate technical measures’ to verify such claims and to proactively identify and flag synthetic content. This effectively shifts a greater onus onto Big Tech companies, many of which have substantial investments in AI technologies themselves.
IT Minister Ashwini Vaishnaw highlighted that the move addresses the growing misuse of AI for impersonation, privacy breaches, and the spread of misinformation. MeitY Secretary S. Krishnan further clarified the intent behind the regulations, stating, ‘You can post AI content — just label it clearly.’ This underscores the government’s aim for transparency rather than outright censorship, allowing for innovation while safeguarding public trust in digital information.
The need for such regulations has become critical as AI-generated deepfakes are increasingly weaponized for political propaganda, scams, and character assassination. With nearly a billion internet users, India faces a particularly high risk from manipulated media, where misinformation can inflame communal tensions or disrupt democratic processes. The draft rules are open for stakeholder feedback until November 6, 2025, before finalization. This policy aligns India with emerging global AI governance models seen in regions like the EU and countries like China, reflecting a worldwide effort to manage the challenges posed by advanced AI technologies.