WASHINGTON — President Donald Trump issued a sweeping executive directive Friday, ordering all federal agencies to immediately cease the use of technology from AI powerhouse Anthropic. The move escalates a high-stakes standoff between the administration and Silicon Valley over the ethical boundaries of artificial intelligence in warfare.
The order follows a tense week of negotiations that saw Anthropic CEO Dario Amodei reject a Pentagon ultimatum to strip safety guardrails from “Claude,” the firm’s flagship AI model. In response, Defense Secretary Pete Hegseth officially designated Anthropic a “supply chain risk”—a blacklisting label typically reserved for foreign adversaries like Huawei.
“The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War,” Trump wrote in a fiery post on Truth Social. “Their selfishness is putting AMERICAN LIVES at risk… We don’t need it, we don’t want it, and will not do business with them again!”
The ‘Red Lines’ of Modern Warfare
The conflict centers on two specific safety protocols Anthropic refuses to waive:
- Mass Domestic Surveillance: Restrictions preventing the use of Claude to monitor U.S. citizens.
- Lethal Autonomy: A prohibition against using the AI to power weapons systems that can kill without a human operator “in the loop.”
The Pentagon had initially agreed to these terms in a 2025 contract worth $200 million, but Secretary Hegseth recently demanded “unfettered access” to the model for all lawful purposes. Amodei remained defiant as the 5:01 p.m. Friday deadline passed, stating that the company “cannot in good conscience” allow its technology to be used for autonomous killing or mass surveillance.
A Unified Front in Silicon Valley
In an unprecedented display of industry solidarity, a coalition representing more than 700,000 tech workers at Amazon, Google, Microsoft, and OpenAI signed an open letter urging their employers to reject similar Pentagon demands.
Even OpenAI CEO Sam Altman, a primary competitor to Anthropic, sided with Amodei in an internal memo to staff. “We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons,” Altman wrote, while noting he is still attempting to broker a “de-escalation” with the Department of War.
National Security Implications
The fallout poses an immediate logistical challenge for the U.S. intelligence community. Claude was the first frontier AI model deployed on the government’s classified networks and is currently used by the CIA and NSA for critical data analysis.
The administration has granted agencies a six-month “phase-out” period to transition away from Anthropic’s tools, but experts warn that replacing Claude will not be quick. While the administration has signaled a move toward Elon Musk’s Grok AI or potentially a restricted version of OpenAI’s models, some defense officials have privately questioned whether those systems can match Claude’s current level of integration and reliability.
What’s Next?
Anthropic has vowed to fight the “supply chain risk” designation in court, calling it “legally unsound.” The legal battle will likely test the limits of the Defense Production Act, which the administration has threatened to invoke to seize control of the software.
As the tech industry watches closely, the outcome of this feud will define the power dynamic between Washington and the AI labs that now hold the keys to 21st-century national security.