The Nuclear Option
The Pentagon is considering designating Anthropic — the San Francisco AI safety company behind Claude — as a supply chain risk to the U.S. Department of Defense.
Let that sink in. The supply chain risk designation under DFARS (Defense Federal Acquisition Regulation Supplement) is a tool designed for companies like Huawei, Kaspersky, and Chinese telecom firms. Foreign adversaries. Companies suspected of espionage or sabotage. It has never been applied to a major American technology company.
Defense Secretary Pete Hegseth is reportedly pushing the designation after months of contentious negotiations with Anthropic over the terms of Claude's military deployment. The message is clear: cooperate without conditions, or we'll treat you like a hostile foreign entity.
What Anthropic Wants (and Won't Budge On)
Anthropic has been negotiating a government contract for Claude Gov, a version of Claude tailored for classified and military environments. The contract is reportedly worth around $200 million over two years — a significant deal, but a fraction of Anthropic's estimated $14 billion in annual revenue.
The sticking point is Anthropic's insistence on two red lines:
- No mass surveillance of American citizens — Claude should not be used to build domestic surveillance systems targeting U.S. persons
- No fully autonomous lethal weapons — Claude should not make kill decisions without a human in the loop
These aren't exotic positions. Both align with existing U.S. law and policy (the Fourth Amendment, the Pentagon's own directives on autonomy in weapon systems) and with the AI safety principles Anthropic was literally founded to advance.
But the Pentagon doesn't want conditions. It wants "all lawful purposes" — a blanket authorization that would let the military use Claude however it sees fit, with no AI-company-imposed guardrails.
Claude Already Went to War
Here's the twist: Claude has already been deployed in military operations. Through Anthropic's partnership with Palantir, Claude was integrated into the military's intelligence and operations platforms. It was reportedly used in the planning and execution of the operation to capture Venezuelan President Nicolás Maduro — one of the most high-profile U.S. military actions in recent memory.
So this isn't about whether Claude can be used militarily. It clearly can. It's about whether Anthropic gets to set any boundaries on how it's used going forward.
The Supply Chain Risk Threat
The supply chain risk designation would have cascading consequences far beyond military contracts:
- Eight of the 10 largest U.S. companies currently use Claude in their operations
- A designation would require any company doing business with the DoD to certify they don't use Claude or Claude-derived technology
- This would effectively force major enterprises to choose between their Pentagon contracts and their Anthropic contracts
- Given that defense spending touches nearly every Fortune 100 company, this could cripple Anthropic's commercial business
It's a pressure tactic with enormous leverage. The Pentagon isn't just threatening to cancel a $200M contract — it's threatening to make Claude radioactive across the entire defense-adjacent economy.
The Competition Is Willing
According to multiple reports, the Pentagon has been talking to Google, OpenAI, and xAI as potential replacements. All three are reportedly willing to provide military AI access without the guardrails Anthropic is insisting on.
This creates an uncomfortable dynamic: the company that cares most about AI safety may lose the ability to influence how AI is actually used in military contexts, while companies with fewer safety commitments step in.
It's the classic arms-race dilemma. If Anthropic walks away, someone else fills the gap — possibly with fewer safeguards.
Anthropic's Legal Argument
Anthropic's position isn't purely ethical — it's also legal. The company argues that:
- U.S. law already prohibits mass domestic surveillance without warrants (Fourth Amendment, FISA)
- AI capabilities are outpacing the statutes designed to constrain government surveillance
- Agreeing to "all lawful purposes" today could mean enabling capabilities that current law hasn't contemplated
- As Claude becomes more capable, the gap between what's technically possible and what's legally regulated widens
In other words: Anthropic is worried about being asked to do things that are technically legal today only because nobody imagined AI could do them when the laws were written.
Both Sides Have a Point
The Pentagon's argument is straightforward: the military needs AI tools to maintain national security. Adversaries like China are deploying AI without ethical handwringing. Letting an AI company dictate terms to the Department of Defense sets a dangerous precedent — private companies shouldn't have veto power over military capabilities.
Anthropic's argument is equally serious: AI companies have unique insight into what their models can do. Setting guardrails isn't about blocking the military — it's about ensuring AI deployment stays within constitutional and ethical bounds as capabilities scale exponentially. Once you agree to "all lawful purposes" with a model that improves every quarter, you've signed a blank check.
What Happens Next
The most likely outcome is a compromise — Anthropic agrees to expanded military use with some guardrails, and the Pentagon drops the supply chain risk threat. Both sides have too much to lose from escalation.
But if the designation goes through, it would represent a fundamental shift in the relationship between the U.S. government and its AI industry. It would signal that AI companies either serve the state's interests unconditionally or get treated as threats to national security.
For an industry that's been debating AI alignment for years, the question is suddenly concrete: aligned with whom?
The $200 million contract is a rounding error. The precedent is everything.