The Executive Order

On January 23, 2025, President Trump signed Executive Order 14179, titled "Removing Barriers to American Leadership in Artificial Intelligence." The order revoked the Biden administration's October 2023 AI safety executive order (EO 14110) and directed federal agencies to take several significant actions.

Among these actions, two have direct implications for state AI legislation:

  1. A directive to federal agencies to identify and reduce regulatory barriers to AI deployment, with a specific instruction to consider whether state-level regulations create an undue burden on interstate commerce.

  2. An assertion that AI governance is "a matter of national competitiveness requiring federal coordination" — language that, while not self-executing, signals a potential preemption strategy.

The Preemption Landscape

Federal preemption of state law occurs through three mechanisms, each with different implications for state AI legislation.

Express Preemption

Congress can explicitly preempt state law by including a preemption clause in federal legislation. No current federal AI legislation contains a broad preemption clause, though several bills have included narrow preemption provisions for specific regulatory domains.

Conflict Preemption

State law is preempted when it conflicts with federal law — either because compliance with both is impossible, or because state law stands as an obstacle to the full accomplishment of federal objectives.

EO 14179's directive to reduce regulatory barriers to AI deployment could be interpreted as establishing federal objectives that state AI regulations obstruct. However, the case law on obstacle preemption requires more than a general policy preference: the federal government must identify specific, concrete conflicts between state law and federal regulatory objectives. And an executive order standing alone generally cannot preempt state law; preemptive force must trace to a federal statute or to regulations validly issued under one.

Field Preemption

When Congress legislates so comprehensively in a field that it leaves no room for state regulation, field preemption applies. Federal regulation of AI is nowhere near that comprehensive: Congress has not passed a single major AI bill, and the executive order itself acknowledges the gap by calling for reduced regulation rather than comprehensive federal oversight.

The State Law Toolkit

States are not without tools to defend their AI legislation against preemption challenges.

The Police Power

The Supreme Court has consistently held that states have broad authority to regulate for the health, safety, and welfare of their citizens. Consumer protection — the primary focus of state AI legislation like the Colorado AI Act and the Minnesota Digital Trust Act — is a core exercise of the police power.

Under Rice v. Santa Fe Elevator Corp., courts presume that Congress does not intend to displace state law in areas of traditional state regulation absent a clear and manifest purpose. Consumer protection, tort liability, and professional licensing — all central to state AI legislation — are quintessentially state concerns.

The Savings Clause Strategy

Smart state AI legislation includes savings clauses that explicitly preserve complementary federal regulatory authority while asserting state jurisdiction over consumer protection and civil liability. The Minnesota Digital Trust Act takes this approach, framing its requirements as consumer protection measures that supplement rather than conflict with any future federal AI framework.

The Dormant Commerce Clause Limit

Even absent federal preemption, state AI legislation must navigate the dormant Commerce Clause, which prohibits state laws that discriminate against or unduly burden interstate commerce. Under the Pike v. Bruce Church balancing test, a nondiscriminatory state law survives unless the burden it imposes on interstate commerce is clearly excessive relative to its local benefits. AI companies will inevitably argue that a patchwork of state regulations creates exactly this kind of burden.

The strongest defense against dormant Commerce Clause challenges is to draft state laws that regulate conduct rather than status — that focus on what AI systems do within the state's borders rather than how they are built or where they are headquartered.

The Digital Trust Act's Preemption Resilience

The Minnesota Digital Trust Act is designed with preemption resilience in mind. Several features make it particularly resistant to federal preemption challenges.

Consumer protection framing. The Act is structured as consumer protection legislation, placing it squarely within the state's traditional police power. A preemption challenge must overcome the Rice v. Santa Fe Elevator Corp. presumption against preemption.

Conduct-based regulation. The Act regulates conduct — the deployment of AI agents in covered transactions — rather than the design or development of AI systems. This avoids dormant Commerce Clause concerns about extraterritorial regulation of AI development.

Complementary design. The bonded credential system does not conflict with any existing or proposed federal AI regulation. It creates an additional layer of accountability that supplements rather than displaces federal oversight.

Economic mechanism. The Act's reliance on surety bonds rather than command-and-control regulation makes it structurally different from the prescriptive state regulation most vulnerable to preemption. Bond requirements are common in regulated industries, and there is no tradition of treating them, as a category, as preempted by federal law.

What Comes Next

The executive order is the opening move in what will be a multi-year battle over AI governance authority. Several developments are likely in the near term.

Congressional action. The administration's preference for reduced AI regulation will be tested against bipartisan support for AI accountability measures. Several bills currently in committee include provisions that could affect state regulatory authority.

Industry litigation. AI companies operating in multiple states have a strong incentive to challenge state-level regulation as burdensome and inconsistent. The first major preemption challenge to state AI legislation is likely within the next 12 months.

State coordination. States pursuing AI legislation have begun informal coordination to reduce inconsistency across jurisdictions — a strategy that weakens both preemption and dormant Commerce Clause challenges.

Conclusion

The executive order changes the political context for state AI legislation but does not change the legal analysis. States retain broad authority to protect their citizens from AI-related harms, and that authority is reinforced by two centuries of doctrine, including a strong presumption against preemption in areas of traditional state concern.

The real question is not whether states can regulate AI — they clearly can — but whether they will do so in ways that are structurally resilient to the preemption challenges that are certainly coming.


This analysis will be updated as relevant legislation and litigation develop.