Pentagon may designate Anthropic as ‘Supply Chain Risk’: What this means for the company, its customers and partners


Representative image: US President Donald Trump and Defence Secretary Pete Hegseth

The US Department of Defence may soon designate the Claude developer Anthropic a “supply chain risk”. This classification would require anyone doing business with the military to cut ties with the AI company, a senior Pentagon official told Axios. Defence Secretary Pete Hegseth is reportedly nearing a decision to sever business ties with Anthropic. The designation is typically reserved as a penalty for foreign adversaries. “It will be an enormous pain in the a** to disentangle, and we are going to make sure they pay a price for forcing our hand like this,” the senior official added.

Chief Pentagon spokesman Sean Parnell told Axios, “The Department of War’s relationship with Anthropic is being reviewed. Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people.”

The potential move carries significant implications. Anthropic’s Claude is currently the only AI model available on the military’s classified systems and was reportedly used during the US Army’s January raid on Venezuelan ex-president Nicolas Maduro. Pentagon officials have praised Claude’s capabilities, making any disentanglement a complex undertaking for the military and its partners.

What the Pentagon’s ‘Supply Chain Risk’ designation would mean for Anthropic, its partners and customers

A supply chain risk designation from the Pentagon would require companies doing business with the US Department of Defence to certify that they do not use Claude in their workflows. Given that Anthropic recently said eight of the ten largest US companies use Claude, the impact could extend well beyond the military.

The Pentagon contract under threat is valued at up to $200 million, a small portion of Anthropic’s $14 billion in annual revenue. However, a senior administration official noted that competing models “are just behind” when it comes to specialised government applications, which could complicate any abrupt switch.

The move also sets the tone for the Pentagon’s negotiations with OpenAI, Google, and xAI, all of which have agreed to remove safeguards for use in the military’s unclassified systems but are not yet used for more sensitive classified work. A senior administration official said the Pentagon is confident the three companies will agree to the “all lawful use” standard, though a source familiar with those discussions said much remains undecided.

Why the Pentagon is moving to punish Anthropic with the ‘Supply Chain Risk’ designation

Anthropic and the Pentagon have held months of contentious negotiations over the terms under which the military can use Claude. Anthropic is prepared to loosen its current terms of use but wants to ensure its tools are not used to conduct mass surveillance on Americans or to develop autonomous weapons with no human involvement.

The Pentagon has argued that those conditions are unduly restrictive and unworkable in practice, insisting that Anthropic and three other AI companies, namely OpenAI, Google, and xAI, allow military use of their tools for “all lawful purposes”. A source familiar with the situation said senior defence officials have been frustrated with Anthropic for some time and embraced the opportunity to make the dispute public.

On the other side, privacy advocates have raised concerns that existing mass-surveillance laws do not account for AI. The Pentagon already collects large amounts of personal data, from social media posts to concealed carry permits, and there are fears that AI could significantly expand its capacity to target civilians.

Commenting on the situation, an Anthropic spokesperson said, “We are having productive conversations, in good faith, with DoW on how to continue that work and get these new and complex issues right.” The spokesperson noted that Claude was the first AI model to be used on classified networks, reiterating the company’s commitment to applying frontier AI for national security.


