Trump Administration Defends Anthropic Blacklisting in Court
The legal battle between Anthropic and the US government is heating up, and the Trump administration's defense of the Anthropic blacklisting reveals just how high the stakes are. In court filings ahead of a March 24 hearing in San Francisco, government attorneys laid out their case for why designating Anthropic a supply-chain risk was not only justified but necessary for national security.
The administration's argument centers on a simple premise: the Department of Defense "isn't required to tolerate the risk that critical military systems will be jeopardized at key moments for national defense and active military operations." In other words, the government shouldn't have to worry about whether a private company might interfere with its AI tools during a conflict.
The Government's Legal Arguments
The Trump administration's legal team has made several interconnected arguments to defend the supply-chain risk designation. Their filings paint Anthropic as a company that prioritizes its own values over national security needs and could potentially disrupt military operations if it disagreed with how its technology was being used.
Key elements of the government's defense:
- **National security authority** — the government has broad discretion to manage its supply chain and can designate any entity as a risk
- **Contractual precedent** — Anthropic's insistence on use restrictions represents an unacceptable constraint on military operations
- **Risk management** — even the possibility of interference justifies precautionary action
- **No constitutional violation** — the designation is a procurement decision, not a speech restriction
- **Executive authority** — the President has the power to direct federal agencies to cease use of any vendor's products
The First Amendment Question
Anthropic's central legal claim is that the supply-chain risk designation violates its First Amendment rights by punishing the company for its public statements about AI safety and its contractual positions on military use. The government rejects this framing entirely.
Government attorneys argue that the designation is based on security concerns, not speech. They contend that the government is simply choosing not to do business with a vendor whose terms it finds unacceptable — a routine procurement decision that doesn't implicate constitutional rights. In their view, Anthropic doesn't have a constitutional right to government contracts.
This is an important legal distinction. If the court accepts the government's framing, Anthropic's case weakens significantly. If the court instead sees the designation as retaliation for Anthropic's public advocacy and contractual positions, the government's position becomes much harder to defend.
The Business Impact Evidence
Anthropic has submitted extensive evidence of the designation's business impact. CFO Krishna Rao disclosed that Anthropic's all-time sales exceed $5 billion but the company has spent over $10 billion training and deploying its models. Hundreds of millions in Pentagon-related revenue are immediately at risk, and the broader commercial impact could reach billions.
Chief commercial officer Paul Smith provided specific examples: a $15 million deal paused, $80 million in deals at risk, a grocery chain canceling meetings. The government's response to this evidence has been, in essence, that it isn't the government's problem: the business impact of a legitimate security decision is the company's concern.
What the Court Will Decide
The March 24 hearing in San Francisco federal court could produce a temporary restraining order that reverses the designation while the case proceeds. Judges deciding TRO motions typically weigh four factors: likelihood of success on the merits, irreparable harm, balance of hardships, and the public interest.
Anthropic has strong arguments on irreparable harm (billions in potential losses) and arguably on public interest (the chilling effect on AI safety commitments). The government has strong arguments on executive authority and national security discretion. The likelihood of success is genuinely uncertain.
Whatever the court decides, this case will set a precedent for how governments interact with AI companies. It's a test case for AI governance that goes far beyond one company and one contract.
Related reading: Pentagon Blacklists Anthropic's Claude — The Full Story · Claude Code and the Future of AI-Assisted Development · The Anthropic Blacklisting — What It Means for AI Regulation