The Colorado Attorney General has initiated rulemaking to implement the state's Artificial Intelligence Act, the process that will transform a landmark statute into an operational compliance regime for developers and deployers of high-risk AI systems.

This is not a symbolic gesture. Colorado's AI Act (SB24-205, signed into law in 2024) imposed affirmative governance duties on entities that build and deploy AI in consequential decision-making contexts. But statutes set the duty. Rulemaking defines the playbook. The Attorney General's office is now building the enforcement architecture that will determine whether Colorado's framework becomes a serious regulatory regime or an aspirational statement.

What the Statute Does

Colorado's AI Act is among the first broad, cross-sector state laws to impose structured obligations on both sides of the AI supply chain. Developers who build high-risk AI systems and deployers who put them into operation each carry distinct but interlocking duties.

The core mechanism is the impact assessment. Deployers of high-risk AI systems must conduct and document assessments evaluating the risks their systems pose—particularly discrimination risks—before deployment and on an ongoing basis. Developers, in turn, must provide the technical documentation deployers need to fulfill those obligations.

The statute also establishes a reasonable care standard. Both developers and deployers must exercise reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. This is a duty of care, not a duty of perfection—but the rulemaking will determine what "reasonable" actually demands in practice.

Key Provisions the Rulemaking Must Operationalize

Defining "High-Risk" at the Boundaries

The statute defines high-risk AI systems by reference to consequential decisions. The clear cases are straightforward: an AI system that determines creditworthiness or screens job applicants is high-risk. But the boundaries are where compliance gets expensive and litigation gets interesting.

The rulemaking must address decision-support tools that inform but don't dictate outcomes. It must grapple with human-in-the-loop systems where a person nominally reviews AI recommendations but functionally rubber-stamps them. And it must determine how automated triage—systems that route cases or prioritize queues without making final determinations—fits within the statutory framework.

These are not academic questions. Every deployer in Colorado needs to know whether their particular use of AI triggers the full impact assessment and risk management apparatus. The AG's definitions here will drive compliance budgets across industries.

Impact Assessment Content and Cadence

The statute requires impact assessments. The rulemaking must specify what goes in them and how often they're updated.

Expect the rules to address, at minimum:

  • Pre-deployment testing requirements, including discrimination testing methodologies and statistical thresholds
  • Documentation standards for training data provenance, model architecture decisions, and known limitations
  • Ongoing monitoring obligations, including triggers for reassessment when system performance degrades or deployment context changes
  • Retention and availability requirements specifying how long assessments must be maintained and under what circumstances they must be produced to the AG's office

The cadence question matters enormously. A requirement to update impact assessments annually is a different compliance burden than a requirement to update them whenever the model is retrained or the deployment context materially changes. The rulemaking will set this baseline.
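The "statistical thresholds" item above can be made concrete. One long-standing candidate methodology, borrowed from employment testing, is the adverse impact ratio and its "four-fifths rule." The sketch below is illustrative only: the group labels, sample numbers, and the 0.8 threshold are assumptions drawn from that convention, not requirements the statute or rulemaking has adopted.

```python
# Hypothetical sketch: adverse impact ratio ("four-fifths rule") as one
# candidate statistical threshold for pre-deployment discrimination testing.
# Numbers and the 0.8 cutoff are illustrative assumptions, not drawn from
# the Colorado AI Act or its rulemaking.

def selection_rate(selected: int, total: int) -> float:
    """Fraction of a group receiving the favorable outcome."""
    return selected / total

def adverse_impact_ratio(protected_rate: float, reference_rate: float) -> float:
    """Ratio of the protected group's selection rate to the reference group's."""
    return protected_rate / reference_rate

# Example: an AI screening tool's outcomes by group (made-up counts).
reference = selection_rate(selected=60, total=100)   # 0.60
protected = selection_rate(selected=40, total=100)   # 0.40

ratio = adverse_impact_ratio(protected, reference)
flagged = ratio < 0.8  # four-fifths rule: ratios below 0.8 warrant review

print(f"AIR = {ratio:.2f}, flagged for review: {flagged}")
```

Whether the rules adopt this ratio, a statistical-significance test, or some combination is exactly the kind of detail the rulemaking must settle; the point is that each choice implies a different testing pipeline.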

Developer-Deployer Responsibility Allocation

This is where the statute's architecture gets genuinely novel, and where the rulemaking has the most room to shape market dynamics.

Developers must provide deployers with sufficient documentation to support deployer-side compliance. But what counts as "sufficient"? The rulemaking must address:

  • Whether developers must provide model cards or equivalent technical documentation as a statutory floor
  • What audit rights deployers can contractually require—and whether the rules create default expectations for audit cooperation
  • How responsibility flows when a deployer relies on a developer's representations about system performance and those representations prove inaccurate
  • Whether contractual flow-down provisions between developers and deployers will be treated as evidence of reasonable care or as insufficient without independent verification

Vendors of AI systems should expect immediate contractual pressure. Deployers facing statutory duties will push those duties upstream through procurement requirements, audit clauses, and indemnification provisions. The rulemaking's specificity—or lack thereof—on developer obligations will determine how much of that pressure sticks.
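If the rules do treat model cards or equivalent documentation as a floor, procurement teams will need a way to check completeness at intake. The sketch below models that check; the field names are illustrative assumptions loosely patterned on the model-card convention, not a statutory checklist.

```python
# Hypothetical sketch: developer-supplied documentation a deployer might
# require contractually, with a crude completeness check at procurement.
# Field names are illustrative, modeled loosely on the "model card"
# convention -- not a statutory or regulatory checklist.
from dataclasses import dataclass

@dataclass
class ModelCard:
    system_name: str
    intended_uses: list[str]
    known_limitations: list[str]
    training_data_summary: str                # provenance, period, known gaps
    performance_benchmarks: dict[str, float]  # metric name -> value
    discrimination_testing: dict[str, float]  # e.g. selection rates by group

    def supports_deployer_assessment(self) -> bool:
        """True if every field a deployer-side assessment relies on is present."""
        return all([
            self.intended_uses,
            self.known_limitations,
            self.training_data_summary,
            self.performance_benchmarks,
            self.discrimination_testing,
        ])

card = ModelCard(
    system_name="resume-screener-v2",  # invented example system
    intended_uses=["initial resume triage"],
    known_limitations=["not validated for non-US resume formats"],
    training_data_summary="2019-2023 hiring outcomes, single employer",
    performance_benchmarks={"auc": 0.81},
    discrimination_testing={"group_a_rate": 0.58, "group_b_rate": 0.49},
)
print(card.supports_deployer_assessment())
```

A completeness gate like this is the mechanical end of the contractual pressure described above: deployers will refuse to onboard systems whose documentation fails it.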

The "Reasonable Care" Standard

The statute's reasonable care standard is both its greatest strength and its greatest source of uncertainty. It avoids the rigidity of prescriptive checklists while creating genuine ambiguity about what compliance looks like.

The rulemaking can resolve this in several ways. It could define compliance safe harbors tied to recognized frameworks—alignment with the NIST AI Risk Management Framework (AI RMF 1.0, published in 2023) being the most likely candidate. It could establish minimum procedural requirements that, if followed, create a rebuttable presumption of reasonable care. Or it could leave the standard flexible and let enforcement actions define its contours case by case.

The first approach gives regulated entities the most certainty. The third gives the AG the most enforcement flexibility. Expect the final rules to land somewhere in between: procedural minimums that don't fully insulate deployers from liability but do provide meaningful compliance guidance.

Enforcement Architecture

The Colorado Attorney General holds enforcement authority under the Act. There is no private right of action—consumers cannot sue directly under the statute. This concentrates enforcement discretion in a single office and makes the rulemaking record a critical signal of enforcement priorities.

Watch for signals in the rulemaking about:

  • Whether discrimination testing will be a primary enforcement focus, with specific testing methodologies required or recommended
  • Whether incident reporting obligations will be imposed, requiring deployers to notify the AG's office when high-risk systems produce discriminatory outcomes
  • How enforcement will interact with existing authority under the Colorado Consumer Protection Act, Colo. Rev. Stat. § 6-1-101 et seq., potentially allowing the AG to pursue AI-related harms under both frameworks
  • Whether the rules create any compliance defense that limits penalties for entities that maintained documented, good-faith governance programs

Compliance Implications

For Deployers

Organizations using AI in any of the statute's consequential decision categories—employment, housing, credit, insurance, education, healthcare—should begin building impact assessment infrastructure now. The rulemaking will specify requirements, but the structural elements are clear from the statute: you need documented risk identification, discrimination testing, mitigation measures, and ongoing monitoring.

Do not wait for final rules to start. The statute's duties are already law. The rulemaking clarifies the floor, not the ceiling.
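The ongoing-monitoring element, in particular, can be built now without waiting for the cadence rules. The sketch below encodes the two cadence models discussed earlier—an annual baseline plus event-driven triggers for retraining, context change, or performance degradation. The trigger names and the 365-day window are illustrative assumptions, not regulatory requirements.

```python
# Hypothetical sketch: deciding when an impact assessment must be redone,
# combining an assumed annual baseline with event-driven triggers
# (retraining, material context change, performance degradation).
# The 365-day window and trigger set are illustrative, not prescribed.
from datetime import date, timedelta

ANNUAL_REVIEW_WINDOW = timedelta(days=365)  # assumed baseline cadence

def reassessment_due(
    last_assessed: date,
    today: date,
    model_retrained_since: bool,
    context_changed_since: bool,
    performance_degraded: bool,
) -> bool:
    """True if any event-driven trigger fired or the annual window lapsed."""
    event_triggers = (
        model_retrained_since or context_changed_since or performance_degraded
    )
    annual_lapsed = today - last_assessed > ANNUAL_REVIEW_WINDOW
    return event_triggers or annual_lapsed

# A retrain forces reassessment even inside the annual window.
print(reassessment_due(date(2025, 1, 1), date(2025, 6, 1), True, False, False))
# No trigger fired and the window has not lapsed: no reassessment yet.
print(reassessment_due(date(2025, 1, 1), date(2025, 6, 1), False, False, False))
```

Whichever cadence the final rules adopt, an organization that already logs these events can change one function rather than rebuild its monitoring program.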

For Developers

AI system vendors face a new documentation burden that will reshape product delivery. Technical documentation, testing results, performance benchmarks, known limitations, and training data descriptions will become standard deliverables—not because customers ask nicely, but because customers face statutory liability if they deploy without them.

Developers who build documentation and audit cooperation into their product offerings now will have a competitive advantage. Those who resist will face contractual friction and potential downstream liability exposure.

For Multi-State Operations

Companies operating across multiple states face the fragmentation problem. Colorado's rules will establish one compliance baseline. Other states will establish others. The rational response is to build to the highest common denominator.

Colorado's rulemaking, if it produces detailed and workable requirements, could become that denominator by default. Companies that build governance programs to satisfy Colorado's requirements will likely satisfy less prescriptive regimes elsewhere. This is how state-level regulation produces de facto national standards in the absence of federal action.

The Digital Trust Lens

Colorado's framework, viewed through the analytical frame of the Minnesota Digital Trust & Consumer Protection Act, reveals both alignment and gaps.

The alignment is structural. Both frameworks recognize that entities introducing AI-driven risk into consequential decisions owe affirmative duties to the people affected. Both reject the notion that deploying an AI system is a neutral act that shifts responsibility to the consumer. Both impose documentation and assessment obligations that function as governance substrates—creating auditable records of who decided what, based on what information, with what safeguards.

Colorado's impact assessment and risk management obligations function as a form of issuer-like responsibility. A deployer that puts a high-risk AI system into operation is, in effect, issuing a consequential digital instrument—a credit decision, an insurance determination, an employment screen—and the statute demands that the deployer document the provenance, performance, and foreseeable harms of that instrument before it touches a consumer.

The gap is in enforcement architecture. Colorado relies on AG enforcement alone. The Minnesota Digital Trust Act's vision of bonded credentials and strict liability for certain digital trust failures creates a different accountability mechanism—one that doesn't depend on prosecutorial discretion to activate. Colorado's approach works when the AG's office is resourced and motivated. It fails when enforcement attention shifts elsewhere.

The deeper gap is in substrate agnosticism. Colorado's Act is AI-specific. It regulates AI systems making consequential decisions. The Minnesota framework's substrate-agnostic approach would capture the same harms whether they're produced by a neural network, a decision tree, a rules engine, or a human following an algorithm's recommendation. Colorado's rulemaking could narrow or widen this gap depending on how broadly it defines the systems and decisions within scope.

Why This Rulemaking Matters Beyond Colorado

Colorado is writing the first detailed operational playbook for comprehensive state AI governance in the United States. The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) provides a European model, but U.S. state regulators face different constitutional constraints, different enforcement traditions, and different market dynamics.

The rules that emerge from this process will be studied, copied, and adapted by every state legislature and attorney general considering AI regulation. They will be cited in federal policy debates as evidence of what state-level AI governance looks like in practice. And they will be tested—by regulated entities seeking clarity, by enforcement actions seeking accountability, and by affected consumers seeking redress.

The rulemaking process itself is an opportunity. The Colorado AG's rulemaking page provides the procedural details and participation mechanisms. Stakeholders who engage now—filing comments, attending hearings, proposing workable compliance pathways—will shape the rules that govern AI accountability for years.

The question Colorado is answering is the question every AI governance framework must answer: what does it mean, concretely and enforceably, for an entity that deploys a high-risk AI system to owe a duty of care to the people that system affects? The rulemaking will produce Colorado's answer. The rest of the country will be reading it.