The Indian government has drastically tightened its digital regulations by mandating that social media platforms remove deepfakes and objectionable content within a three-hour window.
This new directive, led by the Ministry of Electronics and Information Technology (MeitY), significantly reduces the previous 36-hour grace period and makes it mandatory for platforms to label AI-generated content to prevent the spread of misinformation.
The move comes as a direct response to the escalating threat of deepfakes, highly realistic digital media created with artificial intelligence to imitate or replace a person's likeness, which have increasingly targeted high-profile individuals and ordinary citizens across India.
Ashwini Vaishnaw, the Union Minister for Electronics and IT, emphasized that the speed of digital dissemination necessitates a much faster response from intermediaries.
He noted that the first few hours are critical in preventing a piece of viral misinformation from causing widespread social or individual harm.
Under the updated Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, platforms must now deploy automated tools to identify and proactively label content that has been manipulated by AI.
Labelling involves adding a visible watermark or text tag that informs the viewer that the media they are consuming is synthetic and not a real recording.
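The rules do not prescribe any particular technical method for applying such a label, so the following is only a minimal sketch of what a visible text tag might look like in practice, assuming a platform uses Python with the Pillow imaging library; the function name, wording of the notice, and banner placement are all hypothetical choices for illustration.

```python
# Hypothetical sketch: stamping a visible "AI-generated" notice onto an image
# so viewers can tell the media is synthetic. Not an official implementation.
from PIL import Image, ImageDraw, ImageFont

def label_synthetic_image(in_path: str, out_path: str,
                          notice: str = "AI-generated content") -> None:
    """Overlay a visible text tag identifying the image as synthetic."""
    img = Image.open(in_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()

    # Measure the notice text, then draw a contrasting banner near the
    # bottom-left corner so the label remains readable on any background.
    left, top, right, bottom = draw.textbbox((0, 0), notice, font=font)
    text_w, text_h = right - left, bottom - top
    margin = 10
    x, y = margin, img.height - text_h - 2 * margin
    draw.rectangle([x - 5, y - 5, x + text_w + 5, y + text_h + 5], fill="black")
    draw.text((x, y), notice, fill="white", font=font)

    img.save(out_path)

# Example usage (hypothetical file names):
# label_synthetic_image("original.jpg", "labelled.jpg")
```

In practice, platforms may combine a visible overlay like this with embedded provenance metadata, but the rules as described leave the implementation details to the intermediaries.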
The new rules also target non-consensual sexual content and deepfakes that could compromise a person’s dignity or privacy.
For aspiring creators, this means the safety net against identity theft is becoming more robust, though it also places a higher burden of transparency on those using AI tools to enhance their videos or images.
Failure to comply with these rapid takedown orders could result in platforms losing their safe harbour protection.
Safe harbour is a legal provision that shields social media companies from being held legally liable for the content posted by their users; losing this status would open these companies to direct prosecution for illegal uploads.
MeitY has been increasingly vocal about the accountability of Big Tech companies like Meta, Google, and X in the Indian market.
The ministry previously issued several advisories regarding deepfakes in late 2023 following a series of viral incidents involving prominent Bollywood actors and political figures.
India is currently one of the largest markets for these global tech giants, with hundreds of millions of active users across various platforms.
The government has signalled that these rules are just the beginning of a broader legislative effort to regulate the creator economy and ensure that digital growth does not come at the expense of user safety or democratic integrity.