Anthropic Denies It Could Sabotage AI Tools During War

The Trump administration's case against Anthropic just drew significant pushback. In a court filing on March 20, 2026, Anthropic's head of public sector, Thiyagu Ramasamy, directly addressed one of the Pentagon's most alarming claims: that Anthropic could sabotage its own AI tools during active military operations.

The government's argument, laid out in earlier filings, was that Anthropic could theoretically disable Claude, push harmful updates, or otherwise interfere with military systems if the company disapproved of how the Pentagon was using its technology. This was presented as a national security risk — the idea that a civilian tech company could hold military operations hostage.

Anthropic's Rebuttal

Ramasamy's response was unequivocal: "Anthropic has never had the ability to cause Claude to stop working, alter its functionality, shut off access, or otherwise influence or imperil military operations." He went further, detailing the technical architecture that, he said, makes sabotage impossible.

Key points from Anthropic's court filing:

  • **No kill switch** — "Anthropic doesn't maintain any back door or remote 'kill switch'"
  • **No direct access** — Anthropic personnel cannot log into DoD systems to modify or disable models
  • **Update controls** — model updates require approval from both the government and its cloud provider (Amazon Web Services); a toy sketch of this dual-approval gate follows this list
  • **No surveillance capability** — Anthropic cannot access the data processed by Claude on government systems
  • **Architecture limitations** — once deployed, Claude operates within the government's infrastructure, not Anthropic's
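
To make the update-control point concrete, here is a minimal, entirely hypothetical sketch of the kind of dual-approval gate the filing describes: no update is applied unless both the government and the cloud provider have signed off. The names (`Approval`, `apply_update`, the party labels) are illustrative assumptions, not taken from any real deployment pipeline.

```python
from dataclasses import dataclass

# Hypothetical illustration of the dual-approval control described in the
# filing: an update is applied only when every required party approves it.

@dataclass(frozen=True)
class Approval:
    party: str       # e.g. "DoD" or "AWS" (illustrative labels)
    update_id: str   # identifier of the model update being approved

REQUIRED_PARTIES = {"DoD", "AWS"}  # both must sign off before anything ships

def apply_update(update_id: str, approvals: list[Approval]) -> bool:
    """Apply a model update only if all required parties have approved it."""
    approving = {a.party for a in approvals if a.update_id == update_id}
    if not REQUIRED_PARTIES <= approving:
        missing = REQUIRED_PARTIES - approving
        print(f"Update {update_id} blocked; missing approval from: {missing}")
        return False
    print(f"Update {update_id} approved by all required parties; applying.")
    return True

# The vendor alone cannot push an update:
apply_update("model-2026-03", [Approval("Anthropic", "model-2026-03")])
# Only with both required sign-offs does it go through:
apply_update("model-2026-03", [Approval("DoD", "model-2026-03"),
                               Approval("AWS", "model-2026-03")])
```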

The Technical Reality

Ramasamy's filing reflects what most AI engineers already know: deployed AI models don't work the way the government's filings suggest. When Claude runs on government cloud infrastructure through AWS GovCloud, Anthropic has no direct line through which to modify or disable it. The model operates within the government's own security perimeter.
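
As a rough illustration of what "within the government's security perimeter" means in practice, here is a hedged sketch of how a Bedrock-hosted Claude model is typically invoked from the customer's own AWS account. The region, model ID, and request shape are illustrative assumptions, not a description of any actual DoD deployment; the essential point is that the API call goes to an endpoint inside the caller's own cloud environment rather than to Anthropic's servers.

```python
import json
import boto3

# Sketch: invoking a Claude model hosted in the customer's own AWS
# environment via Amazon Bedrock. The region and model ID below are
# illustrative assumptions, not details of any real deployment.
client = boto3.client("bedrock-runtime", region_name="us-gov-west-1")

response = client.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # example model ID
    contentType="application/json",
    accept="application/json",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Summarize this report."}],
    }),
)

# The request and response never transit Anthropic-operated infrastructure:
# the endpoint, IAM credentials, and logs all live in the caller's account.
print(json.loads(response["body"].read())["content"][0]["text"])
```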

This is fundamentally different from a vendor-hosted API, where the provider can revoke access at any time. A deployed AI model is more like installed software: it runs locally (or on dedicated cloud infrastructure) and doesn't require a continuous connection to the vendor's servers. The government's argument treats AI like a subscription service when it is actually closer to a deployed application.
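
The distinction can be sketched in a few lines of toy code. In the subscription model, every call depends on the vendor's endpoint staying up and willing; in the deployed model, inference needs only artifacts already on disk. Everything here is a hypothetical caricature, not Anthropic's or AWS's actual architecture.

```python
# Toy caricature of the two access models the article contrasts;
# none of these classes describe a real system.

class SubscriptionModel:
    """Vendor-hosted API: access exists only while the vendor allows it."""
    def __init__(self) -> None:
        self.vendor_allows_access = True  # a switch the *vendor* controls

    def generate(self, prompt: str) -> str:
        if not self.vendor_allows_access:
            raise ConnectionError("vendor revoked access")
        return f"completion for {prompt!r} (served by the vendor)"

class DeployedModel:
    """Installed software: weights live on infrastructure the operator owns."""
    def __init__(self, weights_path: str) -> None:
        self.weights_path = weights_path  # local artifact, no vendor callback

    def generate(self, prompt: str) -> str:
        # Depends only on local compute and on-disk weights; there is
        # no vendor-side switch to flip.
        return f"completion for {prompt!r} (from {self.weights_path})"

saas = SubscriptionModel()
saas.vendor_allows_access = False          # the vendor pulls the plug...
deployed = DeployedModel("/opt/models/model.bin")
print(deployed.generate("status report"))  # ...the deployed model still runs
try:
    saas.generate("status report")
except ConnectionError as err:
    print(f"subscription model failed: {err}")
```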

Why the Government's Argument Persists

Despite the technical implausibility, the Pentagon's sabotage argument serves a strategic purpose. It frames Anthropic as untrustworthy — a company that might put its own values ahead of national security. This narrative supports the broader case for why the government needs "all lawful use" authority without restrictions from private companies.

Defense Secretary Hegseth has been particularly aggressive in this framing. In his public statements, he characterized Anthropic's refusal to drop use restrictions as "a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives." The sabotage argument adds a concrete (if technically dubious) security dimension to what's fundamentally a political and philosophical disagreement.

The Bigger Picture

Anthropic's rebuttal exposes the weakness of the government's case. If the technical basis for the supply-chain risk designation is that Anthropic could sabotage military systems — and Anthropic can demonstrate that's architecturally impossible — then the designation starts to look less like a security measure and more like political retaliation.

This is exactly what Anthropic argues in its lawsuit. The company claims the designation violates its First Amendment rights by punishing it for protected speech — specifically, its public statements about AI safety and its contractual restrictions on military use. Whether the court agrees will have massive implications for the entire AI industry.

The sabotage claim was always more about narrative than technology. Anthropic's detailed rebuttal makes that clear. Now it's up to the courts to decide whether narrative or reality wins.


Related reading: Pentagon Blacklists Anthropic's Claude — The Full Story · Claude Code and the Future of AI-Assisted Development · The Anthropic Blacklisting — What It Means for AI Regulation