Anthropic is under review by the Pentagon, which is reconsidering its relationship with the AI company, including a $200 million contract, amid concerns that the firm may object to certain military uses of its technology.
Anthropic last year became the first major AI company cleared by the U.S. government for classified use, including military applications. The development drew limited public attention at the time. This week, however, scrutiny intensified after reports indicated the Pentagon could reassess its engagement with the company.
According to reporting, the Department of Defense is weighing whether to designate Anthropic as a “supply chain risk.” That designation is typically applied to companies with ties to nations subject to federal scrutiny, such as China. If applied, it could prevent the Pentagon from doing business with firms that use Anthropic’s AI in their defense work.
In a statement to WIRED, chief Pentagon spokesperson Sean Parnell confirmed that Anthropic is under review. “Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people,” Parnell said.
The message extends beyond Anthropic. OpenAI, xAI, and Google currently hold Department of Defense contracts for unclassified work and are pursuing higher-level clearances to participate in classified national security programs.
Safety Commitments and Military Use
Anthropic has positioned itself as one of the most safety-focused companies in artificial intelligence. The firm’s core product, Claude, is marketed with integrated safeguards intended to limit harmful applications.
The company provides a “custom set of Claude Gov models built exclusively for U.S. national security customers.” Anthropic has stated that it does so without violating its internal standards, including a prohibition on using Claude to produce or design weapons.
Anthropic CEO Dario Amodei has publicly said he does not want Claude involved in autonomous weapons or government surveillance. The company has denied that its model was used as part of a raid to remove Venezuela's president, Nicolás Maduro, despite external reporting to that effect.
Emil Michael, the Department of Defense’s chief technology officer, addressed the question of limitations this week. “If there’s a drone swarm coming out of a military base, what are your options to take it down? If the human reaction time is not fast enough … how are you going to?” he asked reporters.
Michael’s remarks reflect a view within parts of the defense establishment that advanced AI tools may be necessary in scenarios where human response time is insufficient. His comments also underscored potential tension between corporate safety policies and operational military demands.
Industry Tensions Over Regulation and National Security
Anthropic has publicly supported AI regulation, a stance that distinguishes it from some other major AI companies and contrasts with the current administration’s approach to regulation. That position has added another dimension to the company’s relationship with federal agencies.
Researchers and executives across the industry frequently describe artificial intelligence as the most powerful technology yet developed. Many leading AI firms were founded on the premise that artificial general intelligence, or AGI, could be achieved while minimizing widespread harm.
Elon Musk, founder of xAI, previously expressed concerns about AI safety and co-founded OpenAI in part because he believed the technology was too dangerous to be left solely to profit-driven entities. Today, AI companies increasingly compete for government contracts, including those tied to defense and intelligence work.
Anthropic’s mission centers on embedding guardrails into its systems to prevent misuse. The company has referenced Isaac Asimov’s laws of robotics, which include the principle that a robot may not injure a human being or, through inaction, allow a human being to come to harm.
At the same time, leading AI laboratories are seeking integration into advanced military and intelligence systems. Defense officials argue that national security requires access to the most innovative technologies available.
While some technology firms in earlier years hesitated to work with the Pentagon, industry attitudes have shifted. AI developers in 2026 are more commonly pursuing government partnerships, even as questions persist about how their systems may be used in lethal contexts.
Palantir CEO Alex Karp has openly acknowledged the military applications of his company’s technology, stating, “Our product is used on occasion to kill people.” His comments highlight the degree to which AI and data analytics platforms have become embedded in modern defense operations.
The Pentagon’s review of Anthropic places the company at the center of a broader debate over how safety commitments intersect with classified military programs. The outcome could influence how AI firms balance internal safeguards with the operational expectations of national security agencies.
