Government’s new IT rules make AI content labelling mandatory; give Google, YouTube, Instagram and other platforms 3 hours for takedowns
New government rules now mandate clear labelling for all AI-generated content, including deepfakes and synthetic audio, starting February 20. Social media platforms must verify user declarations on AI content and embed traceable metadata. Takedown timelines have been drastically reduced to as little as three hours for certain violations, with platforms also required to warn users about penalties.

The Central government has notified amendments to the IT intermediary rules that formally bring AI-generated content—including deepfake videos, synthetic audio and algorithmically altered visuals—under a structured regulatory framework for the first time. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, will come into effect from February 20.

Under the updated framework, social media platforms and digital intermediaries must ensure all synthetically generated information (SGI) is clearly and prominently labelled so users can immediately tell it apart from real content. Platforms are also required to embed persistent metadata and unique identifiers to make such content traceable back to its source. Crucially, intermediaries cannot allow the removal or suppression of these labels or metadata once applied.

Bigger platforms, bigger responsibilities

Significant social media intermediaries—think Instagram, YouTube, Facebook—face tighter obligations. Before any upload, they must obtain a user declaration on whether the content is synthetically generated and deploy automated tools to verify those claims. If content is flagged as AI-made, it must carry a visible disclosure before it goes live.

Notably, the government dropped an earlier proposal from the October 2025 draft that would have required visible watermarks covering at least 10% of screen space on AI-generated visuals. Industry groups, including IAMAI, had pushed back hard, calling the rule too rigid and technically impractical across formats. The final version still requires clear labelling—just not a fixed-size watermark plastered across the content.

The amendments also sharply compress takedown timelines. In specific cases, platforms now have just three hours to act on lawful orders, down from 36 hours earlier. Other response windows have been cut from 15 days to seven and from 24 hours to 12.

The rules also mandate that platforms warn users at least once every three months about penalties for violating the new provisions, including misuse of AI-generated content.
