Can AI Companies Refuse to Work With the Military? Anthropic Says Yes
Anthropic is drawing a line in the silicon: AI companies can — and should — refuse to work with the military if it means compromising on safety principles. This isn't just a business position. It's a philosophical stance that challenges the assumption that technology companies exist to serve any customer willing to pay.
The question of whether tech companies can refuse military work isn't new. Google faced employee revolts over Project Maven in 2018, leading the company to decline to renew the drone AI contract. Microsoft employees protested HoloLens military contracts. But Anthropic's situation is different: it didn't refuse to work with the military. It agreed to work under specific conditions. The military then demanded those conditions be removed, and Anthropic said no.
The Right to Say No
Anthropic's position rests on a simple principle: companies have the right to set terms for how their products are used. This is standard practice in virtually every industry. Pharmaceutical companies restrict how drugs can be prescribed. Chemical companies restrict how their products can be used. Software companies license their products with terms of service that limit usage.
AI is no different — or at least, Anthropic argues it shouldn't be. The company built Claude with specific safety properties and deployed it with specific use restrictions. The fact that one of its customers is the most powerful military on earth doesn't, in Anthropic's view, change the fundamental right to set terms.
Key elements of Anthropic's position:
- **Contractual freedom** — companies can set terms for product use, including for government customers
- **Ethical obligations** — AI companies have a responsibility to prevent misuse of their technology
- **Safety architecture** — restrictions on lethal autonomous weapons and mass surveillance aren't arbitrary; they reflect genuine safety concerns
- **Precedent from other industries** — defense contractors routinely refuse specific work on ethical grounds
- **Democratic accountability** — if the government wants unrestricted AI, it should develop it in-house or through legislation
The Government's Counterargument
The Pentagon's position is that national security trumps corporate ethics. In a democracy, military decisions are made by elected officials and their appointed leaders, not by Silicon Valley CEOs. If the government determines that "all lawful use" of AI is appropriate for defense, a private company shouldn't be able to override that determination.
There's historical precedent for this view. During wartime, the government has compelled private companies to produce goods for military use. The Defense Production Act gives the President broad authority to direct industrial resources toward national defense. While the Act hasn't been invoked against Anthropic, its existence highlights the government's view that national security needs can override corporate preferences.
What Other Industries Do
The defense technology industry already has a framework for companies that refuse specific work. Many defense contractors have internal ethics boards that evaluate potential projects. Some refuse weapons work. Others refuse surveillance technology. The market accommodates these choices — companies that refuse certain work simply don't compete for those contracts.
What's unusual about the Anthropic case is the government's response. Rather than simply taking its business elsewhere (which it could easily do with OpenAI, Google, or xAI), the Pentagon chose to punish Anthropic through the supply-chain risk designation. This goes beyond "we won't do business with you" to "we'll make sure nobody does business with you."
The Stakes for Innovation
If the government can punish companies for setting ethical boundaries, the incentive is clear: don't set boundaries. This creates a race to the bottom where AI companies compete to be the most permissive, the most willing to remove restrictions, the most accommodating of government demands. That's the opposite of what AI safety advocates have been working toward.
Anthropic's stand isn't just about one company's principles. It's about whether the AI industry will have the freedom to develop responsible technology — or whether it will become an extension of the defense industrial complex. The answer matters for everyone who uses, builds, or is affected by AI. Which is everyone.