The Department of Defense’s War on Claude


By Arvind Salem ’29

The Department of War’s contract negotiations are normally routine. Because these contracts are often extremely lucrative for the company involved and allow the Department to operate beyond its internal capacity, the process generally goes smoothly for both sides when the company is sufficiently competent and the contract is sufficiently attractive.

Yet, even though both conditions were met here, the negotiation ended with the Department of War taking the unprecedented step of designating Anthropic, a U.S.-based company, a “supply chain risk” (while continuing to employ it in the operation in Iran). Under that designation, defense vendors and contractors are required to certify that they do not use Anthropic’s models in their work with the Pentagon. President Trump additionally ordered all federal agencies to stop using Claude, Anthropic’s AI model.

Anthropic was designated a supply chain risk because it refused to allow the Pentagon to use its AI system for autonomous weapons or to surveil American citizens. The Pentagon, for its part, wanted the freedom to deploy Anthropic’s products for all lawful uses without any additional restrictions from the company. Anthropic refused to concede the point, and the Department of War responded with the designation.

Anthropic challenged the order on a variety of constitutional and statutory grounds, and a federal judge, Judge Rita Lin, recently issued an injunction against it. The most obvious ground is that the designation appears targeted at punishing Anthropic for its stance on AI safety, rather than reflecting a genuine national security concern, in violation of the company’s First Amendment right to freedom of speech. The Department argued that Anthropic was improperly inserting itself into the military chain of command by trying to police how its tool could be used, a determination that, in the Department’s view, should be left to the military’s discretion within the confines of the law.

In issuing the injunction, Judge Lin sided with Anthropic’s characterization of the incident as retaliation for its beliefs. She described the Department’s actions as going far beyond what the supply chain risk provision was meant to guard against, writing that the statute does not support “the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for exposing a disagreement with the government.” She then summarized the case more plainly as “classic First Amendment retaliation.”

However, the injunction does not mean Anthropic has won on the merits; it only pauses the “supply chain risk” designation. Some analysts read the preliminary injunction as a signal that Anthropic will likely prevail, but the injunction itself is not official confirmation that it will.

Overall, the Trump administration’s attempt to rob a company of its ability to express its stance on AI safety carries deep implications for democracy and demonstrates the power of the contracting apparatus. For now, with the injunction in place, the First Amendment has prevailed.
