
AI, Deepfakes, and Your Safety: Understanding India’s New AI Rules

In this article, I discuss the key features of the new AI rules drafted by the Government of India to combat deepfakes, and how they benefit common people.

In today’s world, it is easy to create videos, audio, and images with the help of Artificial Intelligence (AI). While AI has legitimate uses (in movies, advertisements, animation, and so on), it is also widely misused (for deepfakes, cybercrime, election fraud, damaging reputations, and more). I have written many articles about deepfakes, which you can read on my bilingual blog.

Taking this danger seriously, the Government of India has introduced strict new draft rules for large internet companies (‘intermediaries’) such as Google, Facebook, and WhatsApp, and for companies that develop AI technology. The draft was released this past October 22 as amendments to the “Intermediary Guidelines and Digital Media Ethics Code” rules that came into force in 2021. If you have anything to say about the draft, you can send your comments in Microsoft Word or PDF format to itrules.consultation@meity.gov.in before the upcoming November 6.

Key features of the new AI rules:

  • A New Legal Name for ‘Deepfake’ Content: Content created with deepfake or AI technology gets a new legal name: ‘Synthetically Generated Information’, or simply ‘synthetic content’.
  • Mandatory Labeling: Any content created using AI (photo, video, or audio) that is shown to users must carry a clear, prominently visible label such as “Created by AI” or “Video modified by AI”.
  • Clear Rules for Users: All intermediary companies must clearly inform their users: “You cannot post deepfakes, obscene or violent material, or any content that breaks the country’s laws on our platform.”
  • 24-Hour Takedown Rule: If someone files a complaint about a fake, obscene, or defamatory deepfake video or photo of themselves, the company (Facebook, Instagram, etc.) must completely remove that content within 24 hours of receiving the complaint.
  • Permission for ‘Under-Trial’ AI: No company can release its new AI model (e.g., ChatGPT-5 or Gemini 2.0) to the public in an ‘under-trial’ or ‘unreliable’ stage without permission from the Central Government.
  • Watermarks for Traceability: Every photo, video, or audio file created using AI must include a ‘digital signature’ or ‘watermark’ that makes it possible to trace who created it, when, and how (see the illustrative sketch after this list).
  • Loss of ‘Safe Harbour’: If companies like Google and Facebook fail to follow any of these rules, their ‘safe harbour’ protection under Indian law will be revoked, and the company itself can be made an accused and dragged to court.
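
To make the labeling and watermarking ideas concrete, here is a minimal, hypothetical Python sketch (using the Pillow imaging library) of what an AI tool could do: stamp a visible “Created by AI” label onto an image and embed machine-readable provenance fields (creator, tool, timestamp) in the file. The field names and label text are illustrative assumptions on my part, not wording taken from the draft rules.

```python
# Hypothetical illustration only: the draft rules do not prescribe this exact format.
# Requires the Pillow library (pip install pillow).
from datetime import datetime, timezone

from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo


def label_ai_image(img: Image.Image, creator: str, tool: str) -> tuple[Image.Image, PngInfo]:
    """Add a visible 'Created by AI' label and build embeddable provenance metadata."""
    labeled = img.copy()
    draw = ImageDraw.Draw(labeled)
    # Visible label in the corner, so viewers can immediately see the content is synthetic.
    draw.text((10, 10), "Created by AI", fill="white")

    # Machine-readable provenance: who created it, when, and with which tool.
    meta = PngInfo()
    meta.add_text("ai-generated", "true")
    meta.add_text("creator", creator)
    meta.add_text("tool", tool)
    meta.add_text("created-at", datetime.now(timezone.utc).isoformat())
    return labeled, meta


if __name__ == "__main__":
    # Stand-in for an AI-generated image.
    synthetic = Image.new("RGB", (640, 360), "darkblue")
    labeled, meta = label_ai_image(synthetic, creator="example-user", tool="example-model-v1")
    labeled.save("labeled.png", pnginfo=meta)

    # A platform (or anyone else) can later read the embedded provenance back.
    print(Image.open("labeled.png").info.get("ai-generated"))  # -> "true"
```

Note that plain metadata like this is easy to strip or edit, so real-world traceability would more likely rely on tamper-resistant provenance standards (for example, C2PA content credentials) or robust invisible watermarks; the sketch only shows the basic idea of attaching a visible label plus traceable information.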

How these AI rules will affect common people:

  • Increased Safety: From now on, the ‘AI label’ will help you know if a video or photo you see is real or fake. This will reduce the chances of falling for scams or believing fake news.
  • Protection for Women and Children: Women have been among the biggest victims of deepfake technology, with many cases of their photos being morphed into obscene content and circulated. Thanks to the new 24-hour takedown rule, victims can get such content removed quickly.
  • Reliability of Real News: During elections, there is a risk that fake speeches of leaders could be created to incite riots. Now, thanks to labeling and watermarking, the spread of such fake content will be curbed.
  • Accountable Companies: Companies like Facebook and WhatsApp, which used to sit back saying “it’s not our problem,” must now take your complaints seriously. If they don’t, they themselves will land in legal trouble.

Finally, this is not a fight against technology, but rather a fight to ensure technology is used ‘responsibly’ for the good of humanity. In this fight, the government, companies, and the common people must all join hands.
