For the first time, Defense Department CTO Emil Michael tells why Anthropic’s Claude AI models have been designated a ‘national security risk’ for America


The Defense Department’s chief technology officer, Emil Michael, has outlined for the first time why Anthropic’s Claude AI models have been designated a ‘national security risk’. Speaking on CNBC’s Squawk Box, Michael said the ‘different policy preferences’ embedded in Claude’s constitution and training could ‘pollute’ the Pentagon’s supply chain. “We can’t have a company that has a different policy preference that is baked into the model through its constitution, its soul, its policy preferences, pollute the supply chain so our warfighters are getting ineffective weapons, ineffective body armor, ineffective protection,” Michael explained.

Michael further added that the decision to label the AI giant a ‘national security risk’ was not made in order to punish Anthropic, noting that the company has a “huge commercial business” and only a small fraction of its revenue comes from U.S. government contracts. He also dismissed claims that the Pentagon has actively discouraged companies from using Anthropic outside of defense supply chains, calling such reports “rumours.”

Anthropic is the first American company to be designated a supply chain risk

Anthropic was recently labeled a supply chain risk, a designation previously reserved for foreign adversaries. The move requires defense contractors and vendors to certify that they are not using Claude in Pentagon-related work. In response, Anthropic sued the Trump administration, calling the designation “unprecedented and unlawful” and warning that hundreds of millions of dollars in contracts were at risk.

Why Anthropic has sued the government

The Pentagon wanted Anthropic to remove hard limits on deploying its AI for fully autonomous weapons and domestic surveillance of American citizens. Anthropic refused, saying that current AI models are not reliable enough for autonomous weapons and that using them in that way would be dangerous. It also called domestic surveillance a violation of fundamental rights.

When those negotiations broke down, Defense Secretary Pete Hegseth formally designated Anthropic a national security supply-chain risk. President Donald Trump then directed the government to stop working with Anthropic altogether, with a six-month phase-out announced for existing contracts.

The Defense Department has been equally firm, saying that US law – not a private company – should determine how America defends itself, and that the military needs full flexibility to use AI for “any lawful use.” The Pentagon warned that Anthropic’s self-imposed restrictions could endanger American lives.

Transition plan in place at Pentagon

Michael acknowledged that the Pentagon cannot “just rip out” Anthropic’s technology overnight, emphasising that a transition plan is in place. “This is not just Outlook where you could delete it from your desktop,” he said, underscoring the complexity of replacing AI systems integrated into defense operations.

