The Central Government has issued new directives for social media platforms to regulate Artificial Intelligence (AI) generated content and prevent its misuse. These rules aim to tackle emerging risks posed by AI technologies and ensure that platforms act responsibly.
Social media companies must now deploy automated systems and tools to detect and block content that is illegal, obscene, or misleading. This step is meant to stop harmful AI-generated material before it reaches users, protecting online communities from misinformation and abuse.
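In practice, such screening typically runs as a pre-publication hook. The following is a minimal sketch, assuming a hypothetical classifier service; the category names, threshold, and toy heuristic are illustrative and are not drawn from the directives themselves.

```python
# Minimal sketch of a pre-publication screening hook. The classify()
# function is a hypothetical stand-in for a real moderation model.
from dataclasses import dataclass

BLOCKED_CATEGORIES = {"illegal", "obscene", "misleading"}  # from the rules

@dataclass
class Post:
    post_id: str
    body: str

def classify(post: Post) -> dict[str, float]:
    """Hypothetical stand-in for an ML moderation model:
    returns a confidence score per risk category."""
    # Toy heuristic for demonstration only.
    scores = {cat: 0.0 for cat in BLOCKED_CATEGORIES}
    if "deepfake" in post.body.lower():
        scores["misleading"] = 0.9
    return scores

def screen_before_publish(post: Post, threshold: float = 0.8) -> bool:
    """Return True if the post may be published, False if it must be blocked."""
    flagged = {c: s for c, s in classify(post).items() if s >= threshold}
    if flagged:
        print(f"Blocked {post.post_id}: {flagged}")
        return False
    return True

if __name__ == "__main__":
    print(screen_before_publish(Post("p1", "Watch this deepfake clip!")))
```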
In addition, the government has mandated clear labelling of AI content. All AI-generated posts, images, or videos must carry a visible tag indicating their origin, and platforms must not permit these labels or the underlying metadata to be removed or hidden. This measure helps users distinguish authentic content from AI creations.
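Below is a minimal sketch of one possible approach, pairing the visible tag with a content hash so a stripped or altered label can be detected; the field names and the hash-based integrity check are assumptions, not a format the directives prescribe.

```python
# Sketch: attach a visible AI label plus persistent metadata, with a
# content hash so tampering or removal is detectable. Field names are
# illustrative assumptions.
import hashlib
import json

def label_ai_content(content: bytes, generator: str) -> dict:
    """Wrap AI-generated content with a visible tag and a metadata record."""
    metadata = {
        "ai_generated": True,
        "generator": generator,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    return {
        "visible_tag": "AI-generated",  # shown to users alongside the content
        "metadata": metadata,
    }

def metadata_intact(record: dict, content: bytes) -> bool:
    """Detect stripped or tampered labels by re-checking the content hash."""
    meta = record.get("metadata") or {}
    return (
        meta.get("ai_generated") is True
        and meta.get("content_sha256") == hashlib.sha256(content).hexdigest()
    )

if __name__ == "__main__":
    clip = b"...synthetic video bytes..."
    record = label_ai_content(clip, generator="example-model")
    print(json.dumps(record, indent=2))
    print("intact:", metadata_intact(record, clip))
```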
To promote awareness, social media platforms are required to notify users at least once every three months. These notifications must explain the legal consequences of misusing AI content, including the applicable penalties. By keeping users informed, authorities hope to reduce both the deliberate and the accidental spread of harmful material.
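The scheduling side of this obligation is straightforward. A minimal sketch follows, assuming only the three-month interval reported above; the notice text and how the last-notified timestamp is stored are illustrative.

```python
# Sketch of the quarterly-reminder check. The 90-day interval reflects
# the "every three months" requirement; everything else is assumed.
from datetime import datetime, timedelta, timezone

REMINDER_INTERVAL = timedelta(days=90)  # "every three months"

NOTICE = (
    "Reminder: misuse of AI-generated content can attract legal penalties. "
    "Label synthetic media and do not share unlawful or misleading material."
)

def due_for_reminder(last_notified: datetime | None, now: datetime) -> bool:
    """A user is due if never notified or if the interval has elapsed."""
    return last_notified is None or now - last_notified >= REMINDER_INTERVAL

if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    last = now - timedelta(days=120)
    if due_for_reminder(last, now):
        print(NOTICE)  # in production, an in-app notification would be sent
```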
The government has also set strict removal deadlines for flagged content. If authorities or courts order the removal of specific AI-generated posts, platforms must comply within three hours. This rapid-response requirement is designed to limit the reach of harmful or misleading AI content.
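A compliance system would need to track that window for every takedown order. A minimal sketch, assuming only the three-hour figure reported above; the function and field names are hypothetical.

```python
# Sketch of tracking the three-hour takedown window per removal order.
from datetime import datetime, timedelta, timezone

TAKEDOWN_WINDOW = timedelta(hours=3)

def takedown_deadline(order_received: datetime) -> datetime:
    """Removal must be completed within three hours of receiving the order."""
    return order_received + TAKEDOWN_WINDOW

def is_overdue(order_received: datetime, now: datetime) -> bool:
    return now > takedown_deadline(order_received)

if __name__ == "__main__":
    received = datetime.now(timezone.utc) - timedelta(hours=2, minutes=45)
    print("deadline:", takedown_deadline(received).isoformat())
    print("overdue:", is_overdue(received, datetime.now(timezone.utc)))
```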
These regulations reflect the government’s proactive approach to addressing the challenges of AI in digital media. By enforcing labelling, education, and rapid content removal, authorities aim to create a safer and more transparent online environment while promoting responsible use of artificial intelligence.