Anthropic will reject a “final offer” to continue working with the US military, according to its chief executive Dario Amodei, setting up a legal fight between one of America’s leading AI labs and the Pentagon.
The maker of the Claude AI model has refused to allow the defence department free rein in the use of its technology, citing concerns over its potential use in lethal autonomous weapons and mass domestic surveillance.
US defence secretary Pete Hegseth on Tuesday summoned Amodei to Washington to demand that Anthropic permit any legal use of its model by Friday or face being cut from Pentagon supply chains or having its technology co-opted.
“These threats do not change our position: we cannot in good conscience accede to their request,” Amodei wrote in a blog on Thursday.
Anthropic was still hopeful for an agreement, he added. “Given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider,” Amodei said.
Pentagon officials wrote to Anthropic with a final offer on Wednesday, according to people with knowledge of the matter. But Amodei said those terms were unacceptable to his company, which bills itself as more responsible and focused on safety than its rivals.
“In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today’s technology can safely and reliably do,” Amodei wrote.
“We remain ready to continue our work to support the national security of the United States,” he added.
The Pentagon did not immediately respond to a request for comment.
The department has “no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement”, the Pentagon’s chief spokesperson Sean Parnell wrote on X earlier on Thursday.
“We will not let ANY company dictate the terms regarding how we make operational decisions,” he added.
If the Pentagon follows through on its ultimatum, Anthropic could be cut from the US defence supply chain and lose a $200mn contract the company signed with the department last year.
The commercial consequences for the $380bn company, which has partnered with Palantir and is the only AI lab whose models have been used in classified work for the Pentagon, could be profound.
A supply chain risk designation would also probably trigger a legal challenge from Anthropic, according to people with knowledge of the matter.
Companies that have previously been deemed a supply chain risk under US law, such as China’s Huawei, have historically been from adversary nations.
Alan Rozenshtein, associate professor of law at the University of Minnesota Law School and a writer on AI and the military, said the attack on Anthropic “is pretty far outside what the statute possibly constitutes. I suspect Anthropic has strong legal defences if it’s designated a supply chain risk.”
Hegseth has also threatened to invoke the Defense Production Act, a cold war-era measure enabling the president to control domestic industry in the national interest.
The DPA was used by presidents Joe Biden and Donald Trump during the coronavirus pandemic to boost the manufacture of medical supplies, and would allow the Pentagon to make use of Anthropic’s tools without a contractual agreement.