Government’s new IT rules make AI content labelling mandatory; give Google, YouTube, Instagram and other platforms 3 hours for takedowns


New government rules now mandate clear labelling for all AI-generated content, including deepfakes and synthetic audio, starting February 20. Social media platforms must verify user declarations on AI content and embed traceable metadata. Takedown timelines have been drastically reduced to as little as three hours for certain violations, with platforms also required to warn users about penalties.

The Central government has notified amendments to the IT intermediary rules that formally bring AI-generated content, including deepfake videos, synthetic audio and algorithmically altered visuals, under a structured regulatory framework for the first time. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, will come into effect from February 20.

Under the updated framework, social media platforms and digital intermediaries must ensure all synthetically generated information (SGI) is clearly and prominently labelled so users can immediately tell it apart from real content. Platforms are also required to embed persistent metadata and unique identifiers to make such content traceable back to the source. Crucially, intermediaries cannot allow the removal or suppression of these labels or metadata once applied.
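The rules do not prescribe a specific technical format for the mandated metadata and unique identifiers. Purely as an illustration of the idea, a minimal sketch of such a provenance record might pair a content hash (as the unique identifier) with a label and source field; the field names and structure here are assumptions, not anything specified in the amendment:

```python
import hashlib
import json
from datetime import datetime, timezone

def make_sgi_record(content: bytes, source: str) -> dict:
    """Build an illustrative provenance record for synthetically
    generated information (SGI). The schema is hypothetical."""
    # A SHA-256 digest of the content bytes serves as a unique,
    # reproducible identifier for traceability.
    content_id = hashlib.sha256(content).hexdigest()
    return {
        "sgi_label": "synthetically-generated",  # the mandatory label
        "content_id": content_id,                # unique identifier
        "source": source,                        # originating tool/platform
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }

record = make_sgi_record(b"<video bytes>", source="example-genai-tool")
print(json.dumps(record, indent=2))
```

Real deployments would more likely rely on established provenance standards (for instance, C2PA-style signed manifests embedded in the media file itself) rather than a sidecar record like this one.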

Bigger platforms, bigger responsibilities

Significant social media intermediaries, think Instagram, YouTube, Facebook, face tighter obligations. Before any upload, they must obtain a user declaration on whether the content is synthetically generated and deploy automated tools to verify those claims. If content is flagged as AI-made, it must carry a visible disclosure before it goes live.

Notably, the government dropped an earlier proposal from the October 2025 draft that would have required visible watermarks covering at least 10% of screen space on AI-generated visuals. Industry groups, including IAMAI, had pushed back hard, calling the rule too rigid and technically impractical across formats. The final version still requires clear labelling, just not a fixed-size watermark plastered across the content.

The amendments also sharply compress takedown timelines. In specific cases, platforms now have just three hours to act on lawful orders, down from 36 hours earlier. Other response windows have been cut from 15 days to seven and from 24 hours to 12.

The rules also mandate that platforms warn users at least once every three months about penalties for violating the new provisions, including misuse of AI-generated content.


