Pentagon labels Anthropic ‘national security risk’, bars AI firm from defence work


Daijiworld Media Network - Washington

Washington, Mar 6: The United States Department of Defense has designated artificial intelligence company Anthropic as a “supply-chain risk to national security”, effectively banning it from conducting defence-related business with the US military.

According to the company, the designation was issued on Wednesday and requires the Pentagon and its contractors to stop using Anthropic’s AI services for all defence purposes.

US Defence Secretary Pete Hegseth signalled the move earlier on social media, following months of tense negotiations between the Pentagon and the AI firm over how the military should use Anthropic’s generative AI system, Claude AI.

Anthropic CEO Dario Amodei confirmed the decision in a statement, saying the company disagrees with the designation and intends to challenge it legally.

“We do not believe this action is legally sound, and we see no choice but to challenge it in court,” Amodei said, while adding that Anthropic shares the US government’s goal of strengthening national security through responsible AI deployment.

The conflict reportedly centres on the Pentagon’s demand that AI systems be available for “any lawful use,” including military operations. Anthropic had sought stronger safeguards preventing its technology from being used for lethal autonomous weapons or large-scale domestic surveillance.

Though generative AI is a relatively new technology, such models have been rapidly adopted by the administration of Donald Trump, including for potential defence applications.

Until recently, Anthropic had been the only AI provider authorised to operate on the Pentagon’s classified networks.

Following the dispute, competing firms have moved quickly to secure defence deals. OpenAI, led by CEO Sam Altman, announced a new agreement with the Pentagon allowing its AI services to be used in classified environments.

Meanwhile, xAI, founded by Elon Musk, also reached a deal enabling its Grok AI system to be deployed on classified military networks.

Hegseth said Anthropic would be allowed to continue providing services for up to six months to allow a transition to alternative systems.

The “supply-chain risk” designation is typically applied to companies linked to foreign adversaries, making its use against an American firm highly unusual.

Technology advocacy groups and industry leaders have warned that the move could have broader implications for the US AI sector. A coalition including companies such as Nvidia and Apple reportedly urged the Pentagon to avoid imposing the label.

Experts say the action could discourage technology companies from working with the federal government and create uncertainty for investors in the rapidly growing AI industry.

Analysts also pointed out that the designation has not been applied to Chinese AI firms such as DeepSeek, prompting criticism that the decision could weaken America’s position in the global AI race.
