Comments on the Draft IT Rules Amendment (2025)

Published on Tue Dec 30 2025
Kaustubha Kalidindi - DAU


In October this year, the Ministry of Electronics and Information Technology (MeitY) published draft amendments to the IT Rules to cover the scope of harms emanating from AI-generated/synthetically generated information. The rules in their current iteration govern the obligations of intermediaries with respect to content hosted on their platforms; under the proposed amendment, intermediaries would incur additional obligations for synthetic content. The background note states that the aim of the proposed amendment is to curb harms such as deepfakes and to prevent their weaponization to ‘spread misinformation, damage reputations, manipulate or influence elections, or commit financial fraud’.

In sum, the proposed rules require synthetically generated content to be labelled. The label must cover at least 10% of the content area, and intermediaries are required to ensure that any synthetic content hosted on their platforms carries such a label. We submitted a response to the call for comments on the proposed amendment; our key concerns are summarised below.
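To make the coverage requirement concrete before getting into the concerns: the following is a minimal sketch of our own (not taken from the draft rules; the file names and banner layout are assumptions) showing one plausible reading of the requirement, a full-width banner whose height is 10% of the image height and which therefore covers 10% of the total area.

```python
from PIL import Image, ImageDraw

def add_visible_label(path_in: str, path_out: str, text: str = "AI-GENERATED") -> None:
    """Overlay a full-width banner covering 10% of the image area.

    A full-width banner that is 10% of the image height occupies
    exactly 10% of the total surface area -- one plausible reading
    of the draft rules' coverage requirement.
    """
    img = Image.open(path_in).convert("RGB")
    width, height = img.size
    banner_height = max(1, height // 10)  # 10% of height => 10% of area

    draw = ImageDraw.Draw(img)
    draw.rectangle([0, 0, width, banner_height], fill=(0, 0, 0))
    # Default bitmap font; a legible label would need a font scaled to the banner.
    draw.text((10, banner_height // 3), text, fill=(255, 255, 255))
    img.save(path_out)

add_visible_label("photo.jpg", "photo_labelled.jpg")  # hypothetical file names
```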

1. Broad definition of synthetic content: The definition of ‘synthetically generated information’ in the proposed rules is broad. On a plain reading, it would include any modification made to content, including with simple editing tools such as increasing or decreasing the contrast or brightness of an image, or adding filters to images posted on social media, since these fall under the ambit of the word ‘artificiality’. It could also include text edited with an AI-powered spell-checker, and remixes or modified samples of existing music made with software tools. Under the proposed definition, most content online would need to be labelled and verified, including content that falls well below the threshold of what would be considered harmful.
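To illustrate how low this bar sits, consider a routine one-line brightness tweak (a hypothetical example of ours, using the Pillow library); under a broad reading of ‘artificiality’, its output would already qualify as synthetically generated information:

```python
from PIL import Image, ImageEnhance

# A routine edit: increase brightness by 20%. Under a broad reading of
# 'artificiality', even this output would count as synthetically
# generated information and require a label.
img = Image.open("photo.jpg")  # hypothetical input file
brighter = ImageEnhance.Brightness(img).enhance(1.2)
brighter.save("photo_brighter.jpg")
```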

2. The challenge with labels: The proposed rules require any computer resource that enables the creation of synthetically generated information to label such content or permanently embed it with an identifier, covering at least 10% of the content area. The rules state that the media item must be embedded with metadata or labels that are visible or audible. The labels, as per the proposed amendments, are supposed to serve both a technical and a social function. While the technology to establish provenance with metadata is evolving through initiatives like C2PA, adoption is slow, and watermarks are easily removed. Detection technology is also not foolproof: image and video models only support detection of manipulated facial attributes [1], models struggle to detect manipulations that are of low quality or that consist of previously unseen real-world images [2], and audio detection models have failed to perform well on real samples [3]. Given these limitations in the detection and labelling of content, the guidelines stipulate responsibilities that cannot be met by the significant social media intermediaries (SSMIs) on which this content is hosted.
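As a simple illustration of why metadata-based provenance is fragile (a sketch of the general failure mode rather than of C2PA's specific container format, with hypothetical file names), re-encoding an image with an ordinary imaging library silently drops metadata segments unless they are explicitly carried over, so any provenance record stored in them disappears:

```python
from PIL import Image

# Open an image assumed to carry provenance metadata (EXIF/XMP segments;
# C2PA manifests are similarly stored in dedicated metadata segments).
labelled = Image.open("provenance_tagged.jpg")  # hypothetical input file
print("Metadata keys before re-encode:", sorted(labelled.info.keys()))

# Re-save the pixel data. Pillow does not copy EXIF/XMP into the output
# unless explicitly passed, so any provenance record stored there is lost.
labelled.save("reencoded.jpg", quality=90)
print("Metadata keys after re-encode:",
      sorted(Image.open("reencoded.jpg").info.keys()))
```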

3. Addressing harms from AI-generated content: When examining the harms from AI-generated content, we have found that these are already addressed in existing regulations [4]. For example, AI-generated nudes can be actioned under the Information Technology Act and the Bharatiya Nyaya Sanhita, through provisions on impersonation, sexual harassment, and transmission of explicit content [5]. Use of AI to create misleading content during elections can be addressed under the code of conduct released by the ECI [6]. The RBI has issued directions to strengthen the digital lending ecosystem and protect against incidents such as loan app scams [7]. The biggest challenge lies in the enforcement of these existing laws: bad actors seeking to do harm already wilfully violate them, and will not comply with labelling stipulations either. Instead of adding more regulations, strengthening the enforcement of existing ones will address harms from AI-generated content. Some of the concerns faced in enforcement are:

  1. An absence of clear protocols at police stations to handle digital evidence in a manner that maintains its integrity and respects the privacy and dignity of the survivor.
  2. Delays at the Forensic Science Laboratory (FSL), which hinder investigations, and difficulties in translating report conclusions for judicial authorities.
  3. Challenges in validating digital evidence in court and explaining the chain of custody.

Our Key Recommendations

  1. The current legal framework is sufficient to tackle harms pertaining to synthetic content; we recommend that regulation instead focus on expanding the applicability of existing laws to such content.
  2. Outside of the narrow category of AI-generated videos and audio, any definition of AI-modified content will inevitably rely on arbitrary thresholds. Prescriptions for labelling or provenance should therefore be stipulated only for the narrow category of content that is entirely AI-generated. This, however, should come with the understanding that, given the state of technology today, bad actors will easily remove such identifiers.
  3. Community guidelines must be legally compliant, and transparency is vital for effective redressal of platform violations. A techno-legal approach to harms from AI-generated content could also include stipulations for data sharing by SSMIs that enable independent evaluations of content under existing categories of harm. This can help address violations that involve synthetic content.
  4. Detection measures such as watermarking and labelling are reactions to a fast-evolving generative AI landscape, resulting in an endless cat-and-mouse chase [8]. Instead, we must focus on the needs of survivors who have been detrimentally impacted by synthetic content. This requires capacity building and sensitisation of support networks, law enforcement, and the judiciary, so that survivors feel confident in reporting incidents.

Footnotes

  1. https://tattle.co.in/blog/2025-03-12-deepfake-o-meter/

  2. Ibid.

  3. Ibid.

  4. https://tattle.co.in/make-it-real-report.pdf

  5. Ibid.

  6. https://www.pib.gov.in/PressReleseDetailm.aspx?PRID=2019760

  7. https://rbidocs.rbi.org.in/rdocs/notification/PDFs/36NT8C402BE7C2A349E0BFFF3C526668CD7A.PDF

  8. https://tattle.co.in/blog/mitigating-harms-of-synthetic-media/
