Government’s new IT rules make AI content labelling mandatory; give Google, YouTube, Instagram and other platforms 3 hours for takedowns

New government rules now mandate clear labelling for all AI-generated content, including deepfakes and synthetic audio, starting February 20. Social media platforms must verify user declarations on AI content and embed traceable metadata. Takedown timelines have been drastically reduced to as little as three hours for certain violations, with platforms also required to warn users about penalties.

The Central government has notified amendments to the IT intermediary rules that formally bring AI-generated content, including deepfake videos, synthetic audio and algorithmically altered visuals, under a structured regulatory framework for the first time. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, will come into effect from February 20.

Under the updated framework, social media platforms and digital intermediaries must ensure all synthetically generated information (SGI) is clearly and prominently labelled so users can immediately tell it apart from real content. Platforms are also required to embed persistent metadata and unique identifiers to make such content traceable back to the source. Crucially, intermediaries cannot allow the removal or suppression of these labels or metadata once applied.

Bigger platforms, bigger responsibilities

Significant social media intermediaries, such as Instagram, YouTube and Facebook, face tighter obligations. Before any upload, they must obtain a user declaration on whether the content is synthetically generated and deploy automated tools to verify those claims. If content is flagged as AI-made, it must carry a visible disclosure before it goes live.

Notably, the government dropped an earlier proposal from the October 2025 draft that would have required visible watermarks covering at least 10% of screen space on AI-generated visuals. Industry groups, including IAMAI, had pushed back hard, calling the rule too rigid and technically impractical across formats. The final version still requires clear labelling, just not a fixed-size watermark plastered across the content.

The amendments also sharply compress takedown timelines. In specific cases, platforms now have just three hours to act on lawful orders, down from 36 hours earlier. Other response windows have been cut from 15 days to seven and from 24 hours to 12.

The rules also mandate that platforms warn users at least once every three months about penalties for violating the new provisions, including misuse of AI-generated content.




