Minnesota HF 2500 would prohibit health carriers from using algorithms or artificial intelligence to deny prior authorization requests — a categorical ban on automated adverse decisioning in one of health care's most consequential chokepoints.
The bill, which received a hearing before the Minnesota House Commerce Committee in February 2026,[1] represents a governance choice that diverges sharply from the risk-management frameworks dominating AI regulation elsewhere. Where the EU AI Act and Colorado's Artificial Intelligence Act impose obligations on high-risk AI systems — documentation, bias testing, human oversight — HF 2500 draws a line at the outcome itself. No algorithm denies care. Period.
That distinction matters. It is the difference between regulating the tool and prohibiting the harm.
The Core Prohibition
HF 2500's operative provision is straightforward: health carriers may not use algorithms or artificial intelligence to deny prior authorization requests.[2] The denial must come from a human reviewer who can articulate clinical rationale.
This framing is deliberate. Prior authorization denials are the sharp end of utilization management — the moment where an insurer's economic interest collides with a patient's clinical need. By requiring a human at that precise juncture, HF 2500 forces an identifiable person to own the decision and bear accountability for it.
Key Provisions and What They Require
Prohibition on Algorithmic Denials
The central mandate is the ban itself. Health carriers cannot use algorithms or artificial intelligence to produce denial outcomes for prior authorization requests. The breadth of this language raises immediate definitional questions.
If "algorithm" is read broadly — encompassing rules engines, decision trees, predictive models, and machine learning systems — the prohibition sweeps in virtually every automated utilization management tool that can generate an adverse determination. If read narrowly to target only machine learning or generative AI systems, carriers might argue that deterministic rules-based systems fall outside the prohibition.
The practical difference is enormous. Many utilization management platforms marketed as "AI-powered" are, under the hood, combinations of clinical rules engines and statistical models. A broad reading would require carriers to ensure no automated component is functionally determinative of a denial. A narrow reading would permit rules-based auto-adjudication while restricting only ML-driven denials.
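The point is easier to see in code than in statutory text. Below is a purely illustrative sketch (the request fields, thresholds, and function names are invented for this example, not drawn from any real platform): a hand-coded rules engine and a model-style scorer that are functionally interchangeable at the moment of denial.

```python
# Hypothetical sketch: two substrates, one identical adverse outcome.
# All names and thresholds here are illustrative.

from dataclasses import dataclass


@dataclass
class PriorAuthRequest:
    procedure_code: str
    criteria_met: int       # clinical criteria documented in the record
    criteria_required: int  # criteria the carrier's policy demands


def rules_engine_decision(req: PriorAuthRequest) -> str:
    """Deterministic rules engine: a hand-coded threshold check."""
    return "DENY" if req.criteria_met < req.criteria_required else "APPROVE"


def model_decision(req: PriorAuthRequest) -> str:
    """Stand-in for a learned model: score the request, threshold the score."""
    score = req.criteria_met / max(req.criteria_required, 1)
    return "DENY" if score < 1.0 else "APPROVE"


req = PriorAuthRequest("PROC-001", criteria_met=3, criteria_required=4)
assert rules_engine_decision(req) == model_decision(req) == "DENY"
# The patient experiences the same denial either way; only the substrate differs.
```

A definition keyed to substrate invites exactly this kind of regulatory arbitrage: swap the model for a rules table and the denial survives.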
This is where the Minnesota Digital Trust & Consumer Protection Act's analytical frame proves useful. The Act's substrate-agnostic protection principle holds that the nature of the harm, not the technology producing it, should drive regulatory response. A patient denied a medically necessary procedure doesn't care whether the denial was generated by a neural network, a gradient-boosted decision tree, or a hand-coded rules engine. The harm is identical. HF 2500's effectiveness depends on whether its drafters understood this.
Human Review Requirement
The corollary to the prohibition is an implicit mandate: denials must be made through human review. This is more than a "human-in-the-loop" requirement of the kind found in risk-management frameworks. Those frameworks typically permit AI to generate a recommendation that a human then rubber-stamps. HF 2500 appears to go further — the denial cannot be "based on" algorithmic output.
The causation standard here is critical. "Based on" could mean:
1. Sole basis: The algorithm independently produced the denial with no human involvement.
2. Substantial factor: The algorithm's output materially influenced the human reviewer's decision.
3. Any reliance: The human reviewer had access to algorithmic recommendations that informed the denial.
Standard 1 is easy to comply with and easy to circumvent — just add a human click. Standard 3 would effectively ban AI from the denial workflow entirely, including as decision support. Standard 2 is the most likely intended reading, and the hardest to operationalize.
Carriers will immediately test the boundaries. Expect workflows where an AI system flags a case as "likely not meeting medical necessity criteria," a human reviewer receives that flag, and the reviewer issues a denial with a brief clinical narrative. Whether that constitutes a denial "based on" AI is the question HF 2500 must answer clearly — in the statute itself, not in future litigation.
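To make the operational difference between the three readings concrete, consider how a compliance check might classify that exact workflow under each one. Everything in this sketch is an assumption: the statute defines none of these standards, and the enum names, workflow fields, and classification logic are invented for illustration.

```python
# Illustrative only: HF 2500 does not define these standards.

from enum import Enum, auto


class CausationStandard(Enum):
    SOLE_BASIS = auto()          # algorithm alone produced the denial
    SUBSTANTIAL_FACTOR = auto()  # algorithmic output materially influenced the human
    ANY_RELIANCE = auto()        # algorithmic output was merely available to the human


def denial_prohibited(workflow: dict, standard: CausationStandard) -> bool:
    """Would this denial violate the ban under a given reading of 'based on'?"""
    if standard is CausationStandard.SOLE_BASIS:
        return not workflow["human_reviewed"]
    if standard is CausationStandard.SUBSTANTIAL_FACTOR:
        return workflow["ai_flag_shown"] and not workflow["independent_analysis"]
    return workflow["ai_flag_shown"]  # ANY_RELIANCE: mere access taints the denial


# The boundary-testing workflow described above: AI flags the case,
# a human sees the flag and issues a denial with a brief narrative.
workflow = {"human_reviewed": True, "ai_flag_shown": True, "independent_analysis": False}

for std in CausationStandard:
    print(std.name, "->", "prohibited" if denial_prohibited(workflow, std) else "permitted")
# SOLE_BASIS -> permitted
# SUBSTANTIAL_FACTOR -> prohibited
# ANY_RELIANCE -> prohibited
```

Under the sole-basis reading, the human click launders the algorithmic denial; under the other two, it does not. That single branch is the entire compliance question.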
Documentation and Evidentiary Requirements
Even without explicit documentation mandates in the bill text, the prohibition creates implicit evidentiary burdens. If a carrier must demonstrate that a denial was not based on algorithmic output, it needs records showing:
- The human reviewer's independent clinical analysis
- The information the reviewer considered (and did not consider)
- Whether algorithmic recommendations were available to the reviewer
- The reviewer's attestation that the denial reflects their own clinical judgment
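What would such a record look like? A minimal sketch, assuming the four evidentiary elements above; every field name is hypothetical rather than drawn from the bill text or any carrier's system.

```python
# Hypothetical audit record for a human-originated denial.

from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class DenialAuditRecord:
    request_id: str
    reviewer_id: str                   # an identifiable human who owns the decision
    clinical_rationale: str            # the reviewer's own written analysis
    information_considered: list[str]  # what the reviewer actually examined
    ai_recommendation_visible: bool    # was an algorithmic flag available?
    reviewer_attestation: bool         # "this denial reflects my own clinical judgment"
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def is_contestable(self) -> bool:
        """A denial is meaningfully appealable only if a human rationale exists."""
        return bool(self.clinical_rationale.strip()) and self.reviewer_attestation
```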
This maps directly onto Pillar 2 of the Fiduciary Relevance Framework — Transparency and Explainable Redress. A denial backed by a documented human rationale is contestable in a way that an algorithmic output is not. The patient can read the rationale, identify errors, and mount a meaningful appeal. That is the governance gain.
Compliance Implications
For Health Carriers
Carriers operating in Minnesota face a workflow redesign. The standard utilization management stack — intake, clinical rules application, auto-adjudication, exception routing — would need to be restructured so that no adverse determination is produced by an automated system for Minnesota-covered lives.
This likely means:
- Geofencing or feature toggling within utilization management platforms to disable auto-denial functionality for Minnesota members (see the sketch after this list)
- Staffing increases for human reviewers, particularly for high-volume prior authorization categories (imaging, specialty drugs, surgical procedures)
- Training programs ensuring reviewers understand that their role is independent clinical judgment, not ratification of algorithmic recommendations
- Audit controls evidencing that denials were human-originated, including decision logs, reviewer attestations, and clinical rationale documentation
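The jurisdiction gate in the first item might look something like the following sketch. It assumes a hypothetical adjudication pipeline; the function names, flag set, and decision strings are invented for illustration.

```python
# Hypothetical jurisdiction gate for automated adverse determinations.

AUTO_DENIAL_SUPPRESSED = {"MN"}  # jurisdictions where the ban applies


def route_to_human_review(request_id: str) -> str:
    """Stub: queue the case for an independent human clinical reviewer."""
    return "PENDING_HUMAN_REVIEW"


def adjudicate(request_id: str, member_state: str, auto_decision: str) -> str:
    """Approvals may flow through automatically; adverse determinations
    for covered members must originate with a human reviewer."""
    if auto_decision == "DENY" and member_state in AUTO_DENIAL_SUPPRESSED:
        return route_to_human_review(request_id)
    return auto_decision


assert adjudicate("PA-1001", "MN", "DENY") == "PENDING_HUMAN_REVIEW"
assert adjudicate("PA-1002", "WI", "DENY") == "DENY"
assert adjudicate("PA-1003", "MN", "APPROVE") == "APPROVE"
```

Note the asymmetry: automated approvals can survive, because the prohibition runs only against adverse determinations.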
The cost implications are real. Automated prior authorization exists because human review is expensive and slow. HF 2500 accepts that trade-off. The legislature's implicit judgment: the cost of human review is less than the cost of wrongful algorithmic denials.
For Utilization Management Vendors
Vendors providing prior authorization automation — companies like EviCore, Carelon (formerly AIM Specialty Health), and others — face product-level changes. Their platforms would need configurable pathways that suppress denial outputs for Minnesota-covered populations while maintaining automated workflows for other jurisdictions.
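In product terms, that likely reduces to a per-jurisdiction capability table. A minimal sketch, assuming a hypothetical configuration schema rather than any vendor's actual format:

```python
# Hypothetical per-jurisdiction capability table for a UM platform.

JURISDICTION_POLICY = {
    "MN": {"auto_approve": True, "auto_deny": False},  # HF 2500: no automated denials
    "default": {"auto_approve": True, "auto_deny": True},
}


def allowed_outputs(member_state: str) -> dict:
    """Which automated outcomes may the platform emit for this member?"""
    return JURISDICTION_POLICY.get(member_state, JURISDICTION_POLICY["default"])


assert allowed_outputs("MN")["auto_deny"] is False
```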
For Patients and Providers
The intended beneficiaries gain something concrete: a human being who must look at their clinical situation and explain why care was denied. That explanation creates a record. That record enables appeal. That appeal is the mechanism of accountability.
This is Pillar 3 — Access to Justice and Liability — in its most practical form. The right to contest an adverse decision is meaningless if the decision is unexplainable. By requiring human authorship of denials, HF 2500 makes the prior authorization appeals process functional rather than performative.
The Broader Significance
HF 2500 matters beyond Minnesota for three reasons.
First, it represents a categorical approach to AI governance in a domain where risk-management frameworks may be insufficient. The EU AI Act classifies health insurance AI as high-risk and imposes transparency, documentation, and human oversight obligations. Regulation (EU) 2024/1689 (Artificial Intelligence Act), Art. 6, Annex III. Colorado's AI Act requires developers and deployers of high-risk AI systems to exercise reasonable care to avoid algorithmic discrimination. Colorado S.B. 24-205 (2024). Both assume the tool can be used safely with adequate controls. HF 2500 rejects that assumption for denial decisions. The question it poses to other jurisdictions: are there AI applications where no amount of risk management is sufficient, and prohibition is the only adequate response?
Second, it operationalizes fiduciary duty in the insurance context. Health carriers owe duties to their insureds — duties that exist independent of AI governance law. Minn. Stat. ch. 62A. When a carrier delegates medical necessity determinations to an algorithm, it interposes an unaccountable system between itself and the person to whom it owes a duty. HF 2500 removes that interposition. The carrier must act through a human agent who can be identified, questioned, and held responsible. This is Pillar 1 — Duty of AI Due Care and Loyalty — enforced through structural prohibition rather than after-the-fact liability.
Third, HF 2500 may catalyze a broader reckoning with automated utilization management. The prior authorization system is already under sustained attack — from providers who spend billions annually on compliance, from patients who experience delayed or denied care, and from federal regulators who have proposed (but not finalized) reforms under 42 U.S.C. § 300gg-19a and CMS rulemaking. Adding AI to an already dysfunctional system has accelerated harm without improving accuracy. HF 2500 is a legislative acknowledgment of that reality.
The bill's limitations are worth noting. A prohibition on AI denials does not fix the underlying incentive structure that drives excessive prior authorization. Human reviewers operating under production quotas and denial-rate targets can be just as harmful as algorithms — and harder to audit at scale. HF 2500 addresses the tool, not the incentive. But it is a necessary first step: you cannot reform a system you cannot see, and algorithmic denials are, by design, invisible.
If Minnesota pairs this prohibition with the kind of bonded credential and strict liability framework contemplated by the Digital Trust & Consumer Protection Act — credentialing AI systems used in non-adverse roles, imposing strict liability on issuers of automated tools that produce wrongful outcomes — HF 2500 becomes one component of a coherent governance architecture. The prohibition handles the most dangerous use case. Credentialing and liability handle the rest.
That architecture does not yet exist. But HF 2500 is building toward it, one bright line at a time.
Notes
- [1] The Minnesota House of Representatives committee schedule for February 19, 2026 lists Commerce Committee activity. The specific agenda placement and hearing details for HF 2500 should be confirmed against official committee records at house.mn.gov.
- [2] The precise statutory language of HF 2500, including definitions of "algorithm" and "artificial intelligence," the scope of covered entities (health carriers vs. utilization review organizations), and whether the prohibition extends to adverse modifications or terminations of previously authorized care, requires verification against the enrolled bill text. Analysis here is based on the bill as described in available committee materials and reporting.