Anthropic’s $200 million contract with the Department of Defense is facing uncertainty following reports that the company raised questions about how its Claude AI model was used during the January raid targeting Nicolas Maduro.
“The Department of War’s relationship with Anthropic is being reviewed,” Pentagon Chief Spokesman Sean Parnell told Fortune. “Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people.”
Tensions reportedly intensified after a senior Anthropic official contacted a Palantir executive to seek clarification on Claude’s role in the operation, according to The Hill. The Palantir executive interpreted the inquiry as criticism of the model’s deployment and relayed details of the exchange to the Pentagon. During the raid, President Trump claimed the military employed a so-called “discombobulator” weapon designed to disable enemy equipment.
Anthropic denied discussing specific military operations.
“Anthropic has not discussed the use of Claude for specific operations with the Department of War,” a company spokesperson said in a statement to Fortune. “We have also not expressed concerns to industry partners outside of routine technical discussions.”
At the heart of the dispute are contractual limitations governing the use of AI in defense contexts. Anthropic CEO Dario Amodei has consistently argued for strict safeguards and regulatory oversight, acknowledging the inherent tension between safety priorities and commercial incentives. For months, Anthropic and the Defense Department have reportedly engaged in difficult negotiations over permissible uses of Claude.
Under its agreement with the Pentagon, Anthropic prohibits the use of its AI models for mass surveillance of Americans or for fully autonomous weapons systems. The company has also restricted applications involving “lethal” or “kinetic” military functions. Any direct involvement in combat scenarios could potentially conflict with those terms.
Despite these limitations, Anthropic occupies a prominent position among AI firms working with the government, alongside OpenAI, Google, and xAI. Its Claude model remains the only large language model currently authorized on the Pentagon’s classified networks.
“Claude is used for a wide variety of intelligence-related use cases across the government, including the Department of War, in line with our Usage Policy,” Anthropic said.
The company added that it remains “committed to using frontier AI in support of U.S. national security” and described ongoing discussions with defense officials as “productive” and conducted “in good faith.”
AI and Defense Policy Frictions
While the Defense Department has accelerated AI integration efforts, xAI is the only major provider to permit use of its models for “all lawful purposes.” Other companies continue to maintain operational restrictions.
Amodei has repeatedly emphasized the need for stronger user protections and regulatory frameworks, positioning Anthropic as a safety-focused alternative to competitors. “I’m deeply uncomfortable with these decisions being made by a few companies,” he said last November.
Although speculation had suggested Anthropic might relax certain constraints, the company now faces the possibility of exclusion from future defense partnerships. A senior Pentagon official told Axios that Defense Secretary Pete Hegseth is “close” to removing Anthropic from the military supply chain — a move that could pressure contractors to sever ties with the firm.
“It will be an enormous pain to disentangle, and we are going to make sure they pay a price for forcing our hand like this,” the official reportedly said.
Designating a company as a military supply chain risk is a rare step, historically applied to foreign entities such as Huawei during the 2019 national security ban. Sources cited by Axios suggested defense officials had been frustrated with Anthropic for some time.
The dispute underscores a broader debate between government agencies and AI developers. Defense officials argue that privately imposed ethical constraints may limit operational flexibility, while AI companies maintain that safeguards are essential to prevent misuse. As negotiations continue, the conflict increasingly reflects a larger struggle over who defines the boundaries of AI deployment in national security.