Defense Secretary Pete Hegseth has sparked a firestorm within the tech industry after issuing a high-stakes ultimatum to Anthropic, the San Francisco-based AI firm. In a Tuesday meeting at the Pentagon, Hegseth demanded that the company provide the military with “unfettered” access to its flagship AI model, Claude, or face designation as a “supply chain risk”—a blacklisting typically reserved for foreign adversaries like Huawei.
The deadline for compliance expired at 5:01 p.m. ET on Friday, with Anthropic CEO Dario Amodei publicly stating the company “cannot in good conscience accede” to demands that would bypass safety guardrails against autonomous lethality and domestic surveillance.
The ‘Incoherent’ Ultimatum
The clash centers on Hegseth’s insistence that Anthropic remove internal restrictions that prevent its AI from being used for fully autonomous weapons and mass surveillance of American citizens. To compel compliance, the Pentagon has threatened to:
- Invoke the Defense Production Act (DPA): A Cold War-era law used to force private companies to prioritize government needs.
- Label Anthropic a “Supply Chain Risk”: A move that would effectively bar any company doing business with the Department of Defense (DoD) from using Anthropic’s technology.
Dean Ball, a former AI adviser in the Trump administration who helped craft the President’s AI Action Plan, called the dual threat “incoherent” and “a whole different level of insane.”
“You’re telling everyone else who supplies to the DoD you cannot use Anthropic’s models, while also saying that the DoD must use Anthropic’s models,” Ball told Politico. He warned that such “corporate murder” could devastate the American AI investment landscape.
A ‘Democratic Island’ in a Republican Sea
Anthropic, which holds a $200 million contract with the Pentagon, has long positioned itself as a “safety-first” alternative to competitors like OpenAI and Elon Musk’s xAI. While xAI has reportedly agreed to “all lawful use” terms for its Grok model, Amodei has maintained strict “red lines.”
The friction intensified following a January operation in Venezuela that led to the capture of President Nicolás Maduro. Reports surfaced that the military used Claude via the Palantir data platform during the mission. Sources suggest Hegseth was incensed by internal questions raised at Anthropic about how the model was applied during the raid, viewing the company’s ethical oversight as interference in military command.
Legal and Strategic Red Flags
Legal experts have questioned the logic of the administration’s aggressive posture. Katie Sweeten, a technology lawyer and former Department of Justice official, described the threat as inherently contradictory.
“I don’t know how you can both use the DPA to take over this product and also at the same time say this product is a massive national security risk,” Sweeten said. She noted that the move appears more “punitive” than strategic, potentially poisoning future partnerships between Silicon Valley and the Pentagon.
What’s Next: A Six-Month Phase-Out?
Following the expiration of the Friday deadline, President Donald Trump reportedly ordered federal agencies to “immediately cease” the use of Anthropic technology. However, a six-month “phase out” period has been proposed for the Pentagon, given how deeply Claude is currently integrated into classified networks.
Hegseth, who has frequently criticized “woke culture” in the military, signaled that the Department of War will transition to “more patriotic” services. For Anthropic, the loss of the contract may be only the beginning: the “supply chain risk” label could force thousands of enterprise customers to choose between their AI infrastructure and their government partnerships.