The U.S. Department of Defense and AI developer Anthropic are at a standstill following contentious discussions about eliminating usage safeguards that critics say could enable the government to use the company's technology for autonomous weapons targeting and U.S. domestic surveillance, three people familiar with the matter said.
After several weeks of negotiating contract terms, the two sides have failed to reach agreement, said six people familiar with the talks who spoke on condition of anonymity. The crux of the dispute is whether certain restrictions embedded in commercial AI offerings should be removed so that military and intelligence personnel can deploy the models without being constrained by company-imposed usage policies.
According to the people familiar with the matter, the company's stance on permissible uses of its AI has sharpened tensions with the Trump administration, intensifying disagreements whose details have not been reported publicly. The discussions are being watched as an early test of whether Silicon Valley companies can influence how powerful commercial AI systems are used by U.S. national security agencies.
Pentagon officials, citing a Defense Department memo on AI strategy dated January 9, have argued that federal forces should be able to deploy commercial AI technology irrespective of firms' internal usage policies, provided that such use complies with U.S. law. That position underpins the department's negotiating posture in the talks, people familiar with the discussions said.
A spokesperson for the department, which the Trump administration renamed the Department of War, did not immediately respond to requests for comment.
Anthropic issued a statement saying its technology is already extensively used by the U.S. government for national security work. The company added that it is "in productive discussions with the Department of War about ways to continue that work."
Context and implications
Those close to the negotiations characterized the impasse as revolving around company-imposed safeguards and the government’s desire for operational freedom. The disagreement highlights a fault line between private sector usage controls and government requirements for deploying AI in sensitive national security contexts.
How the negotiations proceed could influence procurement relationships between the Defense Department and commercial AI vendors, though the outcome of the talks remains uncertain.