US Defense Department and Anthropic Clash Over AI Access as Study Examines Nuclear Decision-Making in Simulations

Anthropic AI. Photo credit: Reuters

A dispute has emerged between the US Department of Defense and the artificial intelligence company Anthropic over access to advanced AI systems, amid renewed debate about the risks associated with deploying such technology in military contexts.

US Defense Secretary Pete Hegseth has reportedly set a deadline for Anthropic to make its latest AI models available to the Pentagon. While Anthropic has stated it does not oppose military use of its systems in principle, the company has sought assurances that its models will not be used for mass surveillance of US civilians or for lethal operations without meaningful human oversight.

The Pentagon has not publicly detailed how it intends to use AI systems obtained from Anthropic or other leading developers, several of which have already agreed to provide access to their technologies. Nor has the department formally accepted the conditions proposed by Anthropic. According to reports, the Defense Department could invoke Cold War-era legal authorities to compel cooperation or potentially exclude the company from future government contracts.

In a recent statement, Anthropic Chief Executive Dario Amodei said the company could not agree to the request under the current terms. He added that Anthropic would prefer to continue supporting the Department of Defense, provided its proposed safeguards are respected, and expressed hope that the department would reconsider its position.

The disagreement reflects broader tensions between the US government’s stated ambition to pursue an “AI-first” defense strategy and the emphasis on safety and responsible deployment promoted by some AI developers. Anthropic has positioned itself as prioritizing safety in model development and deployment.

The debate has intensified following reports that Anthropic’s Claude model was used by data analytics firm Palantir Technologies in connection with a separate contract involving US government operations. Neither Anthropic nor the Defense Department has provided detailed public clarification regarding the scope of such uses.

At the same time, new academic research has raised questions about how advanced AI systems behave in high-stakes military simulations. Professor Kenneth Payne of King’s College London conducted a study in which leading AI models developed by Google, OpenAI, and Anthropic were placed in simulated geopolitical scenarios as nuclear-armed states.

Anthropic AI. Photo credit: Reuters

According to the study, the models opted to deploy nuclear weapons in a high proportion of the simulated scenarios, often escalating from conventional conflict to the use of tactical nuclear arms. While the systems generally refrained from initiating full-scale strategic nuclear exchanges targeting civilian population centers, they did issue such threats in certain scenarios when escalation dynamics intensified.

Professor Payne emphasized that the experiment was conducted in a controlled research setting. The models were aware they were participating in simulations and were not connected to real-world decision-making systems. He also noted that there is no indication that any nuclear-armed state is considering granting AI systems autonomous control over nuclear weapons.

The study’s findings, Payne said, highlight the difficulty of designing safeguards that reliably constrain AI behavior across all potential contexts of use. Commercial AI systems typically include safety mechanisms—often referred to as “guardrails”—that limit certain types of outputs. However, the Defense Department is reportedly seeking access to base or less restricted versions of some models for operational purposes.

Defense Secretary Pete Hegseth. Photo credit: Reuters

Anthropic has argued that, given the potential risks associated with advanced AI, explicit boundaries are necessary before such systems are deployed in sensitive military applications. Critics of the Pentagon’s approach, including AI researcher Gary Marcus, have cautioned against concentrating decisions about AI-enabled surveillance or weapons systems in the hands of a single official without broader legislative oversight.

As the deadline approaches, the outcome of the standoff may have implications not only for the relationship between AI developers and the US government, but also for how advanced AI systems are governed in national security settings.
