Philadelphia Independent - Microsoft urges Pentagon pause blacklisting Anthropic

Pennsylvania -

Microsoft urges Pentagon pause blacklisting Anthropic / Photo: RONNY HARTMANN - AFP/File

Microsoft on Tuesday warned a judge that the Pentagon blacklisting of Anthropic could hamper US warfighters and imperil the country's drive to lead in artificial intelligence.

In a brief, Microsoft backed Anthropic's request for an order stopping the Pentagon from implementing its ban on the use of Anthropic AI until the matter is settled in court.

Anthropic filed suit this week against the Trump administration, alleging the US government retaliated against the company for refusing to let its Claude AI model be used for autonomous lethal warfare and mass surveillance of Americans.

In the complaint, filed in federal court in San Francisco, Anthropic seeks to have its designation as a national security supply-chain risk declared unlawful and blocked.

Anthropic is the first US company ever to have been publicly punished with such a designation, a label typically reserved for organizations from foreign adversary countries, such as Chinese tech giant Huawei.

The label not only blocks use of the company's technology by the Pentagon, but also requires all defense vendors and contractors to certify that they do not use Anthropic's models in their work with the department.

- AI overhaul -

Microsoft argued in an amicus brief that blacklisting Anthropic was an unprecedented response to a contract dispute that portended ill for the technology sector as well as the US military.

"This is not the time to put at risk the very AI ecosystem that the administration has helped to champion," Microsoft said in the brief.

A temporary restraining order would allow time to avoid disrupting the American military's ongoing use of advanced AI, Microsoft argued.

"Otherwise, Microsoft and other technology companies must act immediately to alter existing product and contract configurations used by Department of War."

"This could potentially hamper US warfighters at a critical point in time."

The row erupted days before the US military strike on Iran.

Anthropic's Claude is the Pentagon's most widely deployed frontier AI model and the only such model currently operating on its classified systems.

Anthropic had infuriated Pentagon chief Pete Hegseth by insisting the technology should not be used for mass surveillance or fully autonomous weapons systems.

President Donald Trump subsequently ordered every federal agency to cease all use of Anthropic's technology.

"AI should not be used to conduct domestic mass surveillance or put the country in a position where autonomous machines could independently start a war," Microsoft said in the filing.

More than three dozen AI industry insiders from OpenAI and Google, including Google chief scientist Jeff Dean, argued in support of Anthropic in an amicus brief filed with the court on Monday.

In its lawsuit, Anthropic said it was founded on the belief that its AI should be "used in a way that maximizes positive outcomes for humanity" and should "be the safest and the most responsible."

"Anthropic brings this suit because the federal government has retaliated against it for expressing that principle," the lawsuit says.

C.Turner--PI