New India IT Rules Require Social Platforms to Remove Illegal Content in Three Hours
India has introduced strict new social media rules requiring platforms to remove unlawful content within three hours and to label AI-generated content. The move tightens deepfake regulation but raises concerns about online censorship and digital freedom.
India has issued strict new social media rules requiring platforms to remove illegal content within three hours of being notified, down from the 36 hours previously allowed. The new requirement takes effect on February 20 and applies to major platforms such as Meta, YouTube, and X. The amended guidelines also address AI-generated content, including deepfakes.
The government has not explained why the deadline was cut from 36 hours to three. The move has nonetheless sparked intense debate across the technology industry. Digital rights groups warn that the shorter window will lead to online censorship and reduced freedom of expression in India, which has over a billion internet users.
Under existing IT regulations, the Indian government can order the removal of content relating to national security and public order. According to reports, approximately 28,000 web links were blocked in 2024 in response to government requests. Under the new rule, platforms must respond considerably more quickly when they receive such notices.
A key highlight of the regulations is the new provision governing AI-generated material. For the first time, the law expressly covers AI-created content, including synthetic audio and video that appears real, commonly known as deepfakes. Platforms must now clearly label such content. They are also required to apply permanent markings that help trace the material's origin, and these labels cannot be removed once applied.
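To make the traceability idea concrete, here is a minimal, purely illustrative sketch of one way a "permanent" label could work: a metadata record bound to the file's cryptographic hash, so that editing the media invalidates the label. The rules do not specify any particular format; the field names and helper functions below are assumptions for illustration, not the actual mechanism mandated by the law.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_label(media_bytes: bytes, generator: str) -> dict:
    """Build a hypothetical provenance label for AI-generated media.

    The label binds the content's SHA-256 hash to metadata about how it
    was produced, so any tampering with the media breaks verification.
    """
    return {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "ai_generated": True,
        "generator": generator,
        "labeled_at": datetime.now(timezone.utc).isoformat(),
    }

def verify_label(media_bytes: bytes, label: dict) -> bool:
    """Check that a label still matches the media it was issued for."""
    return label["content_sha256"] == hashlib.sha256(media_bytes).hexdigest()

# Example: label a stand-in synthetic video and verify it later.
video = b"...synthetic video bytes..."
label = make_provenance_label(video, generator="example-model-v1")
print(json.dumps(label, indent=2))
assert verify_label(video, label)             # intact media passes
assert not verify_label(video + b"x", label)  # edited media fails
```

Binding the label to a content hash is one plausible reading of a marking that "aids traceability"; real deployments would more likely use embedded watermarks or signed provenance metadata travelling with the file.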
Companies must also use automated tools to detect prohibited AI content, including fake documents, impersonation, child abuse material, misleading videos, and explosives-related information. Experts believe this will increase the use of automation in content moderation.
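The rules do not prescribe how such automation should work. As a rough sketch of the kind of pipeline platforms might build, the hypothetical example below routes uploads by classifier confidence: high-confidence hits in the prohibited categories are removed automatically, while borderline cases go to human review. The `classify` stub, thresholds, and category names are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Categories the amended rules single out for automated detection.
PROHIBITED_CATEGORIES = {
    "fake_document",
    "impersonation",
    "child_abuse_material",
    "misleading_video",
    "explosives_information",
}

@dataclass
class ModerationResult:
    category: Optional[str]
    confidence: float
    action: str  # "remove", "human_review", or "allow"

def classify(content: bytes) -> Tuple[Optional[str], float]:
    """Stand-in for a real ML classifier; returns (category, confidence).

    A production system would run trained detection models here.
    This stub exists only to make the routing logic runnable.
    """
    return None, 0.0

def moderate(content: bytes, auto_remove_threshold: float = 0.95) -> ModerationResult:
    """Route content based on classifier confidence.

    Only high-confidence hits are removed without a person in the loop;
    everything else in a prohibited category is queued for human review,
    the step critics fear gets squeezed under a three-hour deadline.
    """
    category, confidence = classify(content)
    if category in PROHIBITED_CATEGORIES and confidence >= auto_remove_threshold:
        return ModerationResult(category, confidence, "remove")
    if category in PROHIBITED_CATEGORIES:
        return ModerationResult(category, confidence, "human_review")
    return ModerationResult(None, confidence, "allow")

print(moderate(b"some uploaded media"))  # -> action='allow' for the stub
```

The tension in such a design is visible in the threshold: lowering it helps meet the deadline but removes more lawful content, which is exactly the trade-off critics raise.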
However, critics argue that the three-hour limit may force platforms to rely primarily on automated systems with little human review. Some experts fear this could lead to over-removal of content that is not actually unlawful. Technology analysts say meeting such a short deadline could prove very difficult.
While the AI labeling rule is seen as a positive step for transparency, questions remain about how effectively it can be enforced. As the new Indian social media law takes effect, both tech companies and users will be watching closely to see how it affects online freedom and digital safety.
This article is based on reporting from the BBC.