In brief
- Dario Amodei says Anthropic will not remove bans on mass domestic surveillance and fully autonomous weapons.
- The Pentagon has threatened contract termination and possible action under the Defense Production Act.
- The standoff follows reports that the U.S. military used Claude to capture former Venezuelan President Nicolás Maduro.
Anthropic CEO Dario Amodei said Thursday the company will not remove safeguards from its Claude AI model, escalating a dispute with the U.S. Department of Defense over how the technology can be used in classified military systems.
The statement comes as the Defense Department reviews its relationship with Anthropic and weighs potential consequences, including cancellation of the company’s $200 million contract and possible invocation of the Defense Production Act.
“We cannot in good conscience accede to their request,” Amodei wrote, referring to the Pentagon’s demand in January that AI contractors permit use of their systems for “any lawful use.”
While the Pentagon has since required AI vendors to adopt standard “any lawful use” language in future agreements, Anthropic remains the only frontier AI firm resisting turning over unrestricted control of its models to the military.
On Wednesday, Axios first reported that the Pentagon had issued an ultimatum requiring unrestricted military use of Claude. The deadline is reportedly this Friday.
“It is the Department’s prerogative to select contractors most aligned with their vision,” Amodei continued. “But given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider.”
In his statement, Amodei framed the company’s stance as aligned with U.S. national security goals.
“I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries,” he said.
He added that Claude is “extensively deployed across the Department of War and other national security agencies for intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.”
War on AI
The dispute unfolds against broader concerns about how advanced AI systems behave in high-stakes military scenarios. In a recent King’s College London study, OpenAI’s GPT-5.2, Anthropic’s Claude Sonnet 4, and Google’s Gemini 3 Flash deployed nuclear weapons in 95% of simulated geopolitical crises.
During a speech at SpaceX’s Starbase in Texas in January, Defense Secretary Pete Hegseth said the U.S. military plans to deploy the most advanced AI models.
That same month, reports surfaced that Claude had been used during a U.S. operation to capture former Venezuelan President Nicolás Maduro. Amodei denied claims that Anthropic questioned any specific military operations.
“Anthropic understands that the Department of War, not private companies, makes military decisions,” he said. “We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.”
Despite this, Amodei said using these systems for mass domestic surveillance or autonomous weapons is incompatible with democratic values and presents serious risks.
“Today, frontier AI systems are simply not reliable enough to power fully autonomous weapons,” he said. “We will not knowingly provide a product that puts America’s warfighters and civilians at risk.”
He also addressed the Pentagon’s threat to designate Anthropic a “supply chain risk” while also potentially invoking the Defense Production Act.
“These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security,” he said.
Even as Amodei has said the company will not comply with the Pentagon’s request, Anthropic has revised its Responsible Scaling Policy, dropping a pledge to halt training of advanced systems unless guaranteed safeguards are in place.
Robert Weissman, co-president of Public Citizen, said the Pentagon’s posture signals broader pressure on the tech industry.
“The Pentagon is publicly bullying Anthropic, and the public part is intentional, because they want to pressure this particular company and send a message to all big tech and all corporations that we intend to do and take whatever we want and don’t get in our way,” Weissman told Decrypt.
Weissman described Anthropic’s guardrails as “modest” and aimed at preventing “improper surveillance of American people or to facilitate the development and deployment of killer robots, AI-enabled weaponry that could launch lethal strikes without human say-so.”
“Those are the most sensible and modest guardrails you could come up with when it comes to this powerful new technology.”
Regarding the Pentagon’s threat to designate Anthropic a “supply chain risk,” Weissman argued the move would pressure other AI firms to avoid imposing similar limits.
“Individuals might use Claude, but none of the AI companies, particularly Anthropic, have business models based on individual use; they’re looking for business use,” he said. “This is a potentially crushing penalty from the government.”
While the Pentagon has not yet said whether it plans to go through with its threat to terminate the contract or invoke the Defense Production Act, Weissman said the Pentagon is signaling to AI companies that it expects unrestricted access to their technology once it is deployed in government systems.
“The message of the Pentagon is, ‘we’re not going to tolerate this, and we expect to be able to use the technology as it’s invented for any purpose we want,’” Weissman said.
The Department of Defense and Anthropic did not immediately respond to Decrypt’s requests for comment.