Anthropic Sues Pentagon Over Unprecedented AI Blacklist

A safety dispute over autonomous weapons has become a First Amendment battle

Anthropic filed two federal lawsuits on Monday against the Trump administration, challenging the Pentagon’s decision to label the San Francisco-based artificial intelligence company a national security threat. The suits, filed in federal district court in California and in the Washington, D.C., appeals court, call the actions “unprecedented and unlawful” and ask the courts to reverse them entirely.

The stakes are considerable. Anthropic stands to lose hundreds of millions of dollars and its position as the only frontier AI lab cleared for classified military networks. The case will force American courts to decide whether the government can weaponise national security law to override a domestic company’s published ethical commitments.

A Label Built for Foreign Adversaries

The supply chain risk designation has historically been reserved for foreign adversary contractors that could potentially sabotage American national security systems. Applying it to Anthropic is the first known instance of the federal government using the authority against a US company. The designation requires defence vendors and contractors to certify that they do not use Anthropic’s Claude models in their Pentagon work.

Anthropic had signed a USD 200 million contract with the Department of Defence in July and was the first AI lab to deploy its technology across the agency’s classified networks. Renegotiation collapsed over two conditions Anthropic refused to drop: that Claude would not be used for fully autonomous weapons without human oversight, and that it would not be deployed for mass domestic surveillance of American citizens.

The Contract Clause That Broke the Deal

The Pentagon wanted to use Anthropic’s AI for “all lawful purposes,” insisting it could not allow a private company to dictate how the military uses its tools in a national security emergency. Anthropic’s position was that these were not commercial red lines but ethical commitments baked into Claude’s design. CEO Dario Amodei said the company could not “in good conscience” agree to the Pentagon’s terms.

Consequently, Defence Secretary Pete Hegseth issued the supply chain risk designation, and President Trump directed all federal agencies to immediately cease using Anthropic products. Since the feud began, Pentagon officials have cleared Elon Musk’s xAI and OpenAI’s ChatGPT for use in classified systems. The timing of those clearances was not incidental.

Rivals Line Up on Both Sides

Dozens of scientists and researchers at OpenAI and Google DeepMind filed an amicus brief in their personal capacities supporting Anthropic, arguing that the designation could harm US competitiveness and hamper public discussion about AI risks. Notably, their employers were simultaneously positioning themselves to absorb Anthropic’s displaced government contracts.

The Trump administration accused Anthropic of being a “radical left, woke company” and insisted the military would not be “held hostage by the ideological whims of any Big Tech leaders.” The framing reveals the administration’s preferred terrain: culture war, not contract law.

The Hinge Point

The supply chain risk designation was never really about national security logistics. It was the only available statutory instrument sharp enough to sever Anthropic’s government contracts without legislative action or a prolonged procurement dispute. The administration chose it precisely because its original purpose, blocking foreign adversaries, carries coercive weight that a simple contract termination would not. Anthropic’s lawsuit recognises this plainly: the company argues that the Constitution does not allow the government to wield its enormous power to punish a company for its protected speech. Therefore, what is before the courts is not a procurement disagreement. The question is whether the executive branch can redesignate a domestic company as a foreign-style threat to strip it of the legal protections that would otherwise apply. The answer to that question will define the terms under which every AI company operating on government networks must now decide how much of its ethics it is prepared to defend in writing.
