The Core Tension
In January 2026, reports surfaced of a dispute between Anthropic and the Department of Defense regarding the deployment parameters for Claude-based systems in military planning applications. The dispute centered on whether Anthropic's Acceptable Use Policy — which restricts certain categories of use — could be contractually waived in the context of a defense procurement.
This is not a story about one company and one contract. It is a story about the structural tension that emerges when AI systems carry fiduciary-like obligations to multiple principals with conflicting interests.
Background
Anthropic's Acceptable Use Policy prohibits the use of its models for certain categories of harm, including the development of weapons systems and the targeting of individuals. The Department of Defense, through its Chief Digital and Artificial Intelligence Office, sought to deploy Claude-based systems in strategic planning and logistics optimization — applications that, depending on configuration, could implicate these restrictions.
The contractual question is straightforward to state: can Anthropic waive its own safety restrictions for a government customer? The legal question is considerably more complex.
Dual-Loyalty and Fiduciary Duty
Traditional fiduciary analysis identifies a principal and an agent, with the agent owing duties of loyalty, care, and disclosure to the principal. In the Anthropic-Pentagon context, the relevant question is: who is the principal?
Anthropic's users — researchers, developers, consumers — rely on the company's safety commitments as a form of implicit warranty. When Anthropic publishes an acceptable use policy, it creates reasonable expectations about the boundaries within which its systems will operate. These expectations have economic value. Researchers choose to build on Claude rather than competing systems in part because of these safety commitments.
Simultaneously, the Department of Defense is a contractual counterparty with legitimate operational requirements and the sovereign authority to define national security needs.
The Fiduciary Framework
The Digital Trust Act provides a useful analytical framework for this conflict, even though it is state legislation addressing different circumstances. The Act's core insight is that AI system operators owe fiduciary duties to the end users affected by their systems' decisions, and that these duties cannot be waived by contractual arrangement with third parties.
Applied to the Anthropic-Pentagon context, this framework suggests that Anthropic's safety commitments to its user base cannot be unilaterally overridden by a government contract — not because the government lacks authority to contract, but because the fiduciary obligation to existing users is structurally prior to any particular contractual relationship.
The Precedent Problem
There is limited case law directly addressing AI companies' fiduciary obligations to users. The closest analogs come from three areas of existing law.
Professional licensing and dual practice. Attorneys, physicians, and accountants who serve multiple clients with conflicting interests must either obtain informed consent from all parties or withdraw from conflicting representations. The duty of loyalty is not negotiable.
Common carrier and public utility obligations. Companies that hold themselves out as serving the public take on obligations that limit their ability to discriminate among users or to degrade service quality for some users in favor of others.
Consumer protection and unfair practices. When a company makes public commitments about the safety characteristics of its product, deviation from those commitments in favor of a specific customer may constitute an unfair or deceptive trade practice.
Analysis
The Anthropic-Pentagon dispute illuminates a structural problem that will recur across the AI industry. As AI companies scale, they will inevitably face conflicts between different categories of users and customers. The question is whether the industry develops principled frameworks for resolving these conflicts or resolves them ad hoc based on revenue and political pressure.
The Bonded Credential Solution
The Digital Trust Act's bonded credential system offers one structural solution. Under the Act, an AI agent's operator must post a surety bond that can be called upon if the agent violates its certified operating parameters. This creates a financial mechanism that makes safety commitments credible — the operator has capital at risk if the agent operates outside its declared boundaries.
Applied to the Anthropic scenario, a bonded credential would mean that Anthropic's Acceptable Use Policy is not merely a policy document but a legally binding commitment backed by capital. Waiving the policy for a government customer would expose the surety to claims from affected users.
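The mechanics can be made concrete with a toy model. The sketch below is purely illustrative — every class, field, and figure in it is hypothetical, not drawn from the Act's text — but it captures the structural point: capital is escrowed against certified uses, and a verified use outside those parameters exposes the bond to claims regardless of what any single customer's contract says.

```python
from dataclasses import dataclass, field

@dataclass
class BondedCredential:
    """Illustrative model of a surety bond backing an AI operator's
    certified operating parameters (all names and amounts hypothetical)."""
    operator: str
    bond_amount: float                      # capital escrowed at certification
    certified_uses: set = field(default_factory=set)
    claims_paid: float = 0.0

    def remaining_bond(self) -> float:
        return self.bond_amount - self.claims_paid

    def is_violation(self, actual_use: str) -> bool:
        # Any operation outside the certified parameters creates exposure.
        return actual_use not in self.certified_uses

    def pay_claim(self, actual_use: str, damages: float) -> float:
        """Pay an affected user's claim from the bond, capped at what remains."""
        if not self.is_violation(actual_use):
            return 0.0
        payout = min(damages, self.remaining_bond())
        self.claims_paid += payout
        return payout

# A side contract "waiving" the policy for one customer does not shrink
# the operator's exposure: uses outside the certified set stay claimable.
cred = BondedCredential("ExampleAI", bond_amount=1_000_000.0,
                        certified_uses={"research", "logistics"})
print(cred.pay_claim("weapons targeting", damages=250_000.0))  # pays 250000.0
print(cred.remaining_bond())                                   # 750000.0
```

The design choice worth noticing is that the violation test runs against the certified parameters, not against any customer agreement — which is exactly the sense in which the fiduciary commitment is structurally prior to the contract.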
The Transparency Imperative
Whatever the resolution of the Anthropic-Pentagon dispute, the most damaging outcome would be a secret one. If AI companies quietly modify their safety commitments for government customers without disclosing this to their user base, the resulting information asymmetry undermines the entire trust infrastructure of the AI ecosystem.
Conclusion
The Anthropic-Pentagon dispute is a preview of conflicts that will define AI governance for the next decade. The legal frameworks we build now — including the Digital Trust Act's approach to fiduciary duty and bonded credentials — will determine whether these conflicts are resolved through principle or through power.
The answer matters not just for Anthropic and the Pentagon, but for every AI company, every government, and every user who relies on an AI system's stated commitments about how it will and will not operate.
The Fiduciary will continue to track this dispute as additional details emerge.