Anthropic sues the Pentagon over being declared a ‘supply chain risk’

Anthropic has sued the Pentagon and other federal agencies over its designation as a “supply chain risk”, after the AI start-up insisted the US military accept curbs on the use of its technology.

In a filing on Monday, the company asked a federal court in California to declare the designation — usually reserved for Chinese and Russian vendors — “arbitrary” and “capricious”. It also asked a judge to block the Trump administration from implementing it.

The US government was “seeking to destroy the economic value created by one of the world’s fastest-growing private companies, which is a leader in responsibly developing an emergent technology of vital significance to our nation,” the company’s lawyers wrote.

“Anthropic’s reputation and core First Amendment freedoms are under attack,” they added.

The lawsuit marks an escalation in a weeks-long dispute between Anthropic and the Pentagon over the military use of its AI technology.

Defence officials sought sweeping rights to deploy the company’s models, while Anthropic insisted on guardrails it said were necessary to prevent misuse — a disagreement that ultimately collapsed negotiations and led to the start-up being declared a supply chain risk.

The designation, which was made formal last week, obliges companies to cut Anthropic out of their supply chains on military contracts. President Trump has also demanded that federal agencies stop using Anthropic.

But the start-up said “the vast majority” of its customers would be unaffected. Three of the company’s key partners — Amazon, Microsoft and Google — said they would retain ties to Anthropic outside of defence work.

In its claim, Anthropic cited a social media post by Donald Trump, in which he called it an “out-of-control, Radical Left AI company” and a post by defence secretary Pete Hegseth in which he accused Anthropic of “betrayal”.

The $380bn start-up refused to sign an open-ended contract with the Department of Defense, with chief executive Dario Amodei sticking to two “red lines” prohibiting the use of its technology for lethal autonomous weapons and domestic mass surveillance.

According to the filing, Hegseth “began demanding that Anthropic discard its usage restrictions altogether and replace them with a general policy under which the Department may make ‘all lawful use’ of the technology.”

Amodei said he could not “in good conscience” agree to those terms, triggering an explosive breakdown in negotiations between Anthropic and the Pentagon.

Anthropic’s Claude is currently the only AI model being used in classified operations, though rival group OpenAI struck a deal with the Pentagon late last month for its models to be used in the most sensitive missions.

The ChatGPT maker has also faced pushback from employees about the use of its technology for “all lawful purposes”.

Caitlin Kalinowski, who led OpenAI’s hardware team, announced her resignation from the company over the weekend, citing concerns about the use of AI for surveillance and lethal autonomous weapons.

The Pentagon said: “As a matter of Department of War policy, we do not comment on litigation.”
