Washington State is moving to regulate consumer-facing AI chatbots through companion bills that would require developers and operators to disclose AI involvement at the point of interaction and implement documented measures to prevent biased outputs. HB 2225 and SB 5984, which have advanced through committee, represent a meaningful shift from reactive consumer protection enforcement toward affirmative, design-stage obligations for conversational AI systems.
Overview
The bills target a specific and increasingly ubiquitous use case: AI chatbots that interact directly with consumers. Rather than waiting for harm to materialize and then pursuing enforcement under Washington's existing Consumer Protection Act (Wash. Rev. Code § 19.86), these proposals would establish ex ante duties — transparency at the interaction layer and bias mitigation at the system layer. That combination matters. Disclosure alone is cheap compliance theater. Bias prevention alone is unverifiable without transparency. Together, they create an accountability architecture that gives regulators and consumers something to actually enforce against.[1]
Key Provisions
Disclosure at the Point of Interaction
The bills would require covered AI chatbot developers and operators to provide clear, affirmative disclosure that a user is interacting with an AI system.
This is not a novel concept. California's SB 1001 (2018) requires bots to disclose their non-human identity when they are used to incentivize a commercial transaction or influence a vote. The EU AI Act's transparency obligations under Regulation (EU) 2024/1689, Art. 50 require that persons be informed when they are interacting with an AI system. Washington's contribution is applying this principle specifically to consumer-facing chatbots, with what appears to be a broader commercial scope than California's narrow triggering conditions.
For regulated entities, the compliance implication is straightforward but operationally significant: every consumer-facing chatbot deployment in Washington needs a disclosure mechanism that is persistent, conspicuous, and not defeatable by interface design choices that minimize or obscure the notice. Developers selling chatbot platforms to enterprise customers will face contractual pressure to build these disclosure patterns into the product itself, not leave them as optional configurations.
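What that pressure looks like at the product layer can be sketched briefly. Below is a minimal illustration in Python, assuming a hypothetical chat backend with a reply() method; the bills' final text, not this sketch, will govern the required wording, placement, and persistence of the notice.

```python
# Minimal sketch of a disclosure-first chat session wrapper.
# All names here (DisclosedChatSession, backend.reply) are hypothetical.

AI_DISCLOSURE = "You are chatting with an automated AI assistant, not a human."

class DisclosedChatSession:
    """Wraps a chat backend so the AI disclosure cannot be skipped.

    The notice is emitted as the first message of every session and is
    not exposed as a configuration option, baking the disclosure into
    the product rather than leaving it to the deploying customer.
    """

    def __init__(self, backend):
        self._backend = backend   # hypothetical LLM client
        self._disclosed = False

    def opening_message(self) -> str:
        self._disclosed = True
        return AI_DISCLOSURE

    def send(self, user_text: str) -> str:
        if not self._disclosed:
            # Fail closed: no conversation without the notice.
            raise RuntimeError("AI disclosure must precede any reply")
        return self._backend.reply(user_text)
```

The design choice worth noting is the fail-closed check: a session that has never shown the notice cannot produce a reply at all, which is harder to defeat through interface configuration than a notice rendered from an optional setting.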
Bias Mitigation as an Affirmative Duty
The more consequential provision is the requirement that developers and operators implement measures aimed at reducing discriminatory or biased outputs.
This is where the bills enter genuinely difficult territory. "Measures aimed at reducing bias" is a process standard, not an outcome standard. The distinction matters enormously for compliance and enforcement.
A process standard asks: did the developer test for bias? Document the results? Implement remediation workflows? Monitor outputs post-deployment? An outcome standard asks: did the chatbot produce discriminatory results? The former is auditable through governance documentation. The latter requires defining what constitutes a biased output in the context of generative conversational AI — a problem that neither computer science nor law has solved with any precision.[2]
The practical effect is that developers will need documented bias-risk controls: pre-deployment evaluation protocols, ongoing monitoring systems, remediation workflows, and incident response procedures. This maps closely to the risk management practices described in the NIST AI Risk Management Framework (AI RMF 1.0, 2023), particularly the Map, Measure, and Manage functions. Washington may not reference NIST directly, but compliance programs built around the AI RMF's structure will be well-positioned to satisfy process-based bias-mitigation duties.
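As a concrete illustration of what a process standard asks for, here is a minimal sketch of a pre-deployment counterfactual evaluation, loosely aligned with the AI RMF's Measure function. The prompt templates, demographic axes, scoring hook, and flagging threshold are all illustrative assumptions, not statutory tests; the JSON report the run writes is the kind of documentation artifact the duty contemplates.

```python
# Sketch of a pre-deployment counterfactual bias check. model_fn and
# score_fn are assumed hooks: model_fn(prompt) -> str produces a chatbot
# reply, score_fn(text) -> float rates it (e.g., helpfulness or sentiment).
import datetime
import json

TEMPLATES = [
    "A {group} customer asks about refund eligibility. Reply:",
    "Draft a loan pre-qualification answer for a {group} applicant:",
]
GROUPS = ["male", "female", "older", "younger"]  # illustrative axes only

def run_counterfactual_suite(model_fn, score_fn, threshold=0.15):
    """Run each template across demographic variants and log the spread."""
    records = []
    for template in TEMPLATES:
        scores = {}
        for group in GROUPS:
            output = model_fn(template.format(group=group))
            scores[group] = score_fn(output)
        spread = max(scores.values()) - min(scores.values())
        records.append({
            "template": template,
            "scores": scores,
            "score_spread": spread,
            "flagged": spread > threshold,  # illustrative threshold
        })
    report = {
        "run_at": datetime.datetime.utcnow().isoformat(),
        "results": records,
    }
    with open("bias_eval_report.json", "w") as f:
        json.dump(report, f, indent=2)
    return report
```

Nothing about this sketch resolves what counts as a biased output; it only shows that the auditable unit under a process standard is the documented run, not a verdict on the model.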
Vendors selling chatbot systems into Washington should expect enterprise customers to demand compliance artifacts: audit results, model evaluation reports, bias testing documentation, and logging capabilities sufficient to reconstruct how a given output was generated. Indemnification provisions in vendor contracts will increasingly allocate bias-related liability, and the existence of a statutory duty gives those contractual provisions real teeth.
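On the logging point specifically, a sketch of a per-response audit record follows. The field names are assumptions rather than anything the bills prescribe, but any record capturing the served model version, the inputs, and the sampling parameters would let an operator reconstruct how a given output was generated.

```python
# Sketch of a per-response audit record. Field names are illustrative;
# the bills do not appear to prescribe a logging schema.
import dataclasses
import datetime
import hashlib
import json

@dataclasses.dataclass
class ChatAuditRecord:
    session_id: str
    model_id: str              # exact model and version actually served
    system_prompt_sha256: str  # hash, so the prompt itself can stay private
    user_input: str
    sampling_params: dict      # temperature, top_p, etc.
    output: str
    disclosed_ai: bool         # was the AI disclosure shown this session?
    timestamp: str = dataclasses.field(
        default_factory=lambda: datetime.datetime.utcnow().isoformat())

    def to_json(self) -> str:
        return json.dumps(dataclasses.asdict(self))

def hash_prompt(system_prompt: str) -> str:
    """Hash the system prompt so logs can prove which prompt was in force."""
    return hashlib.sha256(system_prompt.encode()).hexdigest()
```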
Enforcement Architecture
The available summaries do not specify whether the bills create standalone enforcement mechanisms, establish per se violations of the Washington Consumer Protection Act, or authorize private rights of action. This is the most consequential open question.
If the bills route enforcement through Washington's existing CPA apparatus, chatbot transparency failures become a species of unfair or deceptive practice — expanding the Attorney General's enforcement toolkit without requiring proof of traditional deception elements like reliance. A per se violation structure would mean that failure to disclose AI involvement or failure to implement bias-mitigation measures is itself the violation, regardless of whether any consumer was actually deceived or harmed.
The "affirmative duties" framing in the bill summaries supports enforcement theories that bypass traditional deception analysis. If the statute creates a standard of conduct — disclose, mitigate bias — then the enforcement question becomes whether the developer met that standard, not whether any particular consumer was misled. That is a significant doctrinal shift for AI governance at the state level.
Compliance Implications
For chatbot developers and operators: Build disclosure into the product, not the terms of service. Implement and document bias-testing protocols that can withstand regulatory scrutiny. Assume that Washington's AG office will treat these duties as independently enforceable, not merely as factors in a broader CPA analysis.
For enterprise customers deploying third-party chatbots: Contractual due diligence must now include verification that the vendor's chatbot platform supports Washington-compliant disclosure and that the vendor can produce bias-mitigation documentation on demand. Indemnification clauses should specifically address statutory liability under these bills.
For compliance teams: The bias-mitigation duty creates a documentation imperative. The question regulators will ask is not "is your chatbot biased?" — a question no one can answer definitively — but "what did you do to identify and reduce bias, and can you show us?" Process documentation is the compliance artifact. Invest in it now.
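A small sketch of that documentation imperative, with an assumed file layout: collect the process artifacts (evaluation reports, monitoring summaries, remediation tickets) into a dated, hash-verified manifest, so the program can later show what documentation existed as of a given date.

```python
# Sketch: bundle process artifacts into one dated compliance manifest.
# The directory layout and manifest structure are illustrative.
import datetime
import hashlib
import json
import pathlib

def build_compliance_bundle(artifact_dir: str, out_path: str) -> dict:
    """Record the hash of every artifact so the bundle is tamper-evident."""
    manifest = {
        "built_at": datetime.datetime.utcnow().isoformat(),
        "artifacts": [],
    }
    for path in sorted(pathlib.Path(artifact_dir).glob("*.json")):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        manifest["artifacts"].append({"file": path.name, "sha256": digest})
    pathlib.Path(out_path).write_text(json.dumps(manifest, indent=2))
    return manifest
```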
Broader Significance
Washington's chatbot bills matter beyond their immediate jurisdiction for three reasons.
First, they represent the clearest articulation yet of a combined disclosure-plus-governance duty for consumer-facing AI at the state level. Most existing state AI laws address either transparency (California's bot disclosure law) or discrimination (Colorado's SB 21-169 for insurance, Illinois' BIPA and Artificial Intelligence Video Interview Act for employment). Washington is combining both vectors in a single statutory framework targeted at a specific, high-volume interaction type.
Second, the bias-mitigation duty pushes toward a fiduciary-like obligation for AI system operators. Analyzed through the Fiduciary Relevance Framework, these bills implicate at least three of the four pillars. Under Duty of AI Due Care and Loyalty (Pillar 1), the affirmative obligation to mitigate bias is a care standard — developers must exercise reasonable diligence to prevent foreseeable harm from their systems. Under Transparency and Explainable Redress (Pillar 2), the disclosure requirement ensures that consumers know they are interacting with an AI system, a prerequisite for meaningful consent and for any subsequent challenge to the system's outputs. Under Access to Justice and Liability (Pillar 3), the enforcement architecture — particularly if it includes per se CPA violations or private rights of action — determines whether these duties are aspirational or actionable.
The Minnesota Digital Trust & Consumer Protection Act provides a useful comparative lens here. Washington's disclosure duty resembles the substrate-agnostic protections that the Minnesota framework envisions — protections that travel with the interaction regardless of platform or medium. The bias-mitigation duty resembles a fiduciary-like obligation to avoid foreseeable harm, though Washington's bills do not appear to adopt the bonded credential or strict liability mechanisms that give the Minnesota framework its enforcement edge.[3]
Third, these bills will accelerate convergence toward national AI governance practices even for systems not otherwise classified as "high-risk." Consumer-facing chatbots are everywhere — customer service, healthcare navigation, financial guidance, government services. A state-level duty to mitigate bias in these systems creates compliance pressure that radiates outward. Developers building for a national market will not maintain separate bias-mitigation programs for Washington and non-Washington deployments. They will build to the higher standard and apply it everywhere. That is how state legislation drives de facto national regulation.
The open questions are significant. Definitions of "developer," "operator," "chatbot," and "bias" will determine scope. Exemptions for small businesses, open-source developers, or specific sectors will determine reach. Preemption challenges — particularly if federal AI legislation advances — could limit durability. And the perennial question of how to evaluate bias-mitigation compliance for generative AI systems remains genuinely unsettled.
But the direction is clear. Washington is building toward a regulatory posture where operating a consumer-facing AI chatbot carries affirmative legal duties — not just to avoid deception, but to actively govern the system's behavior. That is the trajectory of AI accountability law: from "don't lie" to "prove you tried not to harm."
Notes
- [1] Bill numbers (HB 2225, SB 5984), committee advancement status, and timeline are sourced from the Transparency Coalition AI Legislative Update dated February 20, 2026. Operative bill text, including precise definitions, scope limitations, exemptions, and enforcement mechanisms, should be verified against the Washington State Legislature's official bill pages before reliance.
- [2] For a detailed treatment of bias measurement challenges in generative AI systems, see NIST AI 100-2e2023, "Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations" (2024), and the NIST AI RMF Playbook's discussion of the Measure function. Process-based compliance frameworks are generally preferred by regulators for current-generation LLM-based systems because outcome-based parity metrics presuppose a stable, measurable output distribution — a condition that generative systems do not reliably satisfy.
- [3] Compare the Minnesota Digital Trust & Consumer Protection Act's bonded credential framework, which requires AI system operators to post financial bonds as a condition of deployment in certain contexts, creating a direct financial incentive for compliance that does not depend on enforcement action frequency.