Trump administration to cut Anthropic from government contracts and Pentagon supply chain

by dharm
February 28, 2026 · 1:38 AM

Donald Trump has given Anthropic six months before it is cut from US government contracts, saying the AI start-up made a “disastrous mistake” in challenging the Pentagon over the military use of its technology.

The president said he would not “ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS!”

The Pentagon also designated Anthropic a supply-chain risk, an unprecedented action against an American company.

Anthropic chief executive Dario Amodei has faced off against defence secretary Pete Hegseth over how the Pentagon can use the company’s technology in military operations. Hegseth earlier this week said Anthropic had until Friday at 5pm local time to agree to his demands.

But the president said he would allow a “Six Month phase out period for Agencies like the Department of War who are using Anthropic’s products, at various levels”, potentially allowing time for the two sides to cut a deal.

Anthropic’s Claude is the only AI model deployed in classified operations, having been used in the seizure of Venezuelan leader Nicolás Maduro last month. An administration official told the FT that it remained the best model for military use.

But the company has refused to give the defence department open-ended permission to use its technology for all lawful purposes, citing concerns about its use in lethal autonomous weapons or for mass domestic surveillance.

“The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution”, Trump posted on Truth Social on Friday.

Hegseth later on Friday said Anthropic “delivered a masterclass in arrogance and betrayal”. He said that effective immediately, no Pentagon contractors or suppliers “may conduct any commercial activity with Anthropic”.

Josh Gruenbaum, the US procurement chief who agreed a deal with Anthropic last August to supply government agencies, said it would be “dangerous to our nation” to maintain a business relationship with the company, but added: “We look forward to a prompt resolution”.

In cutting Anthropic from its supply chain, the Pentagon is implementing a process that up to now has only been used against foreign companies located in countries such as China and Russia.

Groups that have previously been deemed a supply chain risk include China’s Huawei and ZTE Corporation, and Russia’s Kaspersky Lab.

Negotiations between the Pentagon and Anthropic became increasingly fraught in the past week.

Amodei was summoned to Washington for talks on Tuesday, which failed to produce a resolution. On Thursday, Amodei said his company “cannot in good conscience” agree to the US government’s terms.

Emil Michael, under-secretary of defence for research and engineering and a critical figure in negotiations, called Amodei a liar with a “God-complex”.

Michael struck a more conciliatory tone on Friday before the president’s comments. “I offer more talks, so long as they’re in good faith,” he said, adding that Anthropic had “cut off” talks despite the Pentagon making “a lot of concessions” to the AI lab.

The administration’s moves against Anthropic are a stark warning to other AI groups, including OpenAI, Google and Elon Musk’s xAI, about how far the administration is willing to go to get companies to acquiesce to its demands.

The Pentagon “decided to make Dario a non-patriotic villain, to make an example and intimidate the other companies,” said a former senior defence official. “It’s deeply dangerous.”

Mark Warner, the top Democrat on the US Senate’s intelligence committee, said Trump’s move poses “an enormous risk to US defence readiness”.

The Pentagon is negotiating with other AI labs including OpenAI and xAI about using their models in classified operations.

OpenAI chief Sam Altman signalled that his company would also seek carve-outs preventing the use of its technology in domestic mass surveillance or lethal autonomous weapons in a note to staff on Thursday.

Anthropic signed a contract worth up to $200mn with the Pentagon last summer which accorded with its own acceptable-use policies, including red lines on lethal autonomous weapons and domestic surveillance.

Anthropic had been negotiating with Hegseth’s team over an updated contract for months, according to a person with knowledge of the matter.

Those negotiations became more heated after the defence department outlined a new strategy in January, including a demand that it be able to use AI models without usage policy constraints limiting lawful military applications, added the person.

Anthropic has argued that AI models are not yet reliable enough for humans to be removed from the “kill chain” and that existing surveillance laws are inadequate to prevent mass surveillance.

The Pentagon has disputed that, claiming that Anthropic had derailed the talks by pushing for excessive control over military operations.

The Pentagon “has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement. This narrative is fake and being peddled by leftists in the media,” chief spokesperson Sean Parnell wrote on X on Thursday. 

“We will not let ANY company dictate the terms regarding how we make operational decisions,” he added. 

Additional reporting by Joe Miller in Washington

