In a landmark move to address the growing threat of deepfakes and misinformation, the Government of India has proposed that artificial intelligence (AI) and social media companies must clearly label AI-generated content. The draft proposal, released on Wednesday, aims to ensure transparency, traceability, and accountability in the digital ecosystem — a step that mirrors similar initiatives taken by the European Union and China.
With nearly 1 billion internet users, India faces a massive challenge in combating online misinformation across its diverse ethnic, linguistic, and religious communities. The rise of AI-generated fake content, especially during elections, has prompted urgent government action to safeguard social harmony and public trust.
New AI and Social Media Rules: What They Mandate
Under the proposed framework, AI-generated visuals must carry visible labels covering at least 10% of the display's surface area, while AI-generated audio must be identified within the first 10% of the clip's duration. The labelling requirement applies to all major AI and social media platforms, including OpenAI, Meta (Facebook and Instagram), Google, and X (formerly Twitter).
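The thresholds above are simple proportions, so a compliance check is easy to illustrate. The sketch below is purely hypothetical: the draft rules do not prescribe any API or measurement method, and the function names and units here are assumptions for illustration only.

```python
# Hypothetical compliance check for the proposed 10% labelling thresholds.
# Assumes label area and clip timings are measured elsewhere; this is an
# illustrative sketch, not anything specified by the draft rules.

def visual_label_compliant(label_area_px: int, display_area_px: int) -> bool:
    """Visible label must cover at least 10% of the display's surface area."""
    return label_area_px >= 0.10 * display_area_px

def audio_label_compliant(label_end_s: float, clip_duration_s: float) -> bool:
    """Audible label must finish within the first 10% of the clip's duration."""
    return label_end_s <= 0.10 * clip_duration_s

# Example: a 200,000 px label on a 1920x1080 frame (2,073,600 px) falls
# just short of the 10% area threshold (207,360 px).
print(visual_label_compliant(200_000, 1920 * 1080))  # False
# A label ending at 5 s in a 60 s clip sits inside the first 6 s.
print(audio_label_compliant(5.0, 60.0))  # True
```

In practice, how "surface area" is measured (pixels, physical size, overlay opacity) would be defined by the final rules or subsequent guidance, not by platforms individually.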
The Indian Ministry of Electronics and Information Technology (MeitY) stated that the rules are designed to “ensure visible labelling, metadata traceability, and transparency for all public-facing AI-generated media.” The government has invited public and industry feedback until November 6, 2025, before finalizing the policy.
Additionally, social media platforms must obtain a declaration from users at the time of upload confirming whether the content is AI-generated, and must deploy technical verification tools to detect and label such material automatically. Together, these measures aim to establish an accountability framework that holds both users and companies responsible for preventing the spread of false or misleading AI content.
Growing Concerns Over AI Misuse
The proposal reflects the government’s increasing concern over the potential misuse of generative AI to spread fake news, manipulate elections, and impersonate individuals. Officials warned that the use of deepfake technologies — capable of creating highly realistic but fabricated content — has grown rapidly in India’s online spaces.
“The potential for misuse of generative AI tools to cause user harm, spread misinformation, manipulate elections, or impersonate individuals has grown significantly,” the IT Ministry stated.
Indian courts are already hearing high-profile deepfake cases. Bollywood actors Abhishek Bachchan and Aishwarya Rai Bachchan have filed lawsuits against platforms that hosted AI-generated content misusing their likenesses and infringing their intellectual property, including challenges to YouTube's AI training policies.
A Global First: India’s Quantifiable Labelling Standard
Experts say India’s proposal is among the world’s first to quantify visibility standards for AI-generated content. Dhruv Garg, founding partner at the Indian Governance and Policy Project, said, “The rule about covering 10% of surface area is among the first explicit global attempts to prescribe a measurable visibility standard.”
If implemented, the rules will compel AI developers and social platforms to integrate automated detection and labelling mechanisms into their systems, tagging AI-generated material at the point of creation.
This regulatory development positions India as a global leader in responsible AI governance, balancing innovation with ethical oversight.
India: A Growing Hub for AI Companies
India’s push for stronger AI regulation comes as the country becomes a key growth market for global AI firms. OpenAI CEO Sam Altman revealed in February that India is OpenAI’s second-largest market by number of users, with usage tripling in the past year.
As AI adoption accelerates across education, business, and entertainment sectors, the Indian government is prioritizing digital ethics, user safety, and accountability to ensure that technology serves the public good.
Once finalized, these AI content labelling rules could reshape how digital media is created, shared, and consumed in India. They promise to make AI-generated content instantly recognizable, limit misinformation during elections and public discourse, and encourage responsible AI innovation.
By introducing quantifiable labelling requirements and metadata traceability, India is signaling a new era of transparent and accountable AI governance, positioning itself alongside global leaders in AI regulation and digital safety.