AI use in warfare: Anthropic rejects US demand for ‘unrestricted’ military access
Anthropic CEO Dario Amodei said Thursday that the company cannot agree to the Pentagon’s request for unrestricted access to its AI systems, setting off a public clash with the Trump administration that could jeopardize the company’s federal contract as soon as Friday.
The developer of the Claude chatbot said it remains willing to continue discussions but expressed concern that updated contract terms from the United States Department of Defense failed to adequately restrict the technology’s potential use in mass domestic surveillance or fully autonomous weapons. Pentagon spokesperson Sean Parnell rejected those claims, stating that the military does not intend to deploy AI for illegal surveillance of Americans or for weapons systems operating without human oversight.
Anthropic’s internal guidelines bar such applications. Among major AI firms — including Google, OpenAI, and xAI — Anthropic remains the only one that has declined to provide its models for a new internal U.S. military AI network. Amodei noted that while the Defense Department can choose partners aligned with its objectives, the company hopes officials will reconsider given the value its technology offers to national defense.
Tensions escalated after Defense Secretary Pete Hegseth reportedly set a deadline for Anthropic to grant the military full access to its AI tools or risk termination of the contract. Officials also suggested the firm could be labeled a supply-chain concern or face measures under the Defense Production Act, which would grant the government expanded authority over its products. Amodei argued these warnings were contradictory, pointing out that labeling the company a security risk while simultaneously calling its technology essential sends mixed signals.
Parnell reiterated that the Pentagon seeks comprehensive lawful use of the AI system, asserting that broader access is necessary to protect critical military missions and that no private company will dictate operational decisions.
The dispute, which has unfolded publicly after months of negotiations, has drawn scrutiny from lawmakers. Senator Thom Tillis criticized the Pentagon’s handling of the matter, suggesting such discussions should occur privately. Senator Mark Warner expressed concern over reports that the Defense Department may be pressuring the company, arguing that the episode highlights the urgent need for stronger AI governance frameworks in national security settings. Meanwhile, Pentagon leaders maintain that any AI deployment will comply with existing laws, even as the department continues internal changes to its legal oversight structure.
Voice Of Osiz
The ongoing standoff between Anthropic and the United States Department of Defense highlights a defining moment in the evolution of AI governance and national security collaboration. At Osiz, we believe this situation reinforces the urgent need for clear ethical boundaries, transparent AI deployment policies, and human-in-the-loop safeguards in mission-critical environments. As AI adoption accelerates across defense and public sectors, balancing innovation with responsibility is no longer optional — it is foundational. The debate also signals a broader shift where AI companies are becoming key stakeholders in policy discussions, not just technology vendors. Structured governance frameworks and compliance-driven architectures will shape the next phase of AI integration in sensitive domains. Enterprises and governments must align on lawful, secure, and accountable AI usage to avoid operational and reputational risks. As an AI-driven technology partner, Osiz advocates for building scalable AI systems that prioritize performance, compliance, and ethical integrity from the ground up.
Source: The Times Of India

