California SB 1142, the Digital Dignity Act, would create a statutory property right in an individual's "digital replica" and impose affirmative obligations on generative AI platforms to build revocation tools, verify identity claims, and face civil penalties when they fail.[1]

That single sentence contains three distinct legal moves, each consequential on its own. Together, they represent the most aggressive state-level attempt to regulate AI-generated identity artifacts since deepfake-specific statutes began proliferating in 2023.

What the Bill Does

SB 1142 reframes the problem of AI-generated likenesses. Existing California law — principally Cal. Civ. Code §§ 3344 and 3344.1 — protects individuals against unauthorized commercial use of their name, voice, and likeness through a right-of-publicity framework. That framework requires plaintiffs to prove specific elements: knowing use, commercial purpose, and in many cases, damages.

The Digital Dignity Act takes a different path. Rather than treating unauthorized replicas as a species of publicity misappropriation, the bill characterizes the digital replica itself as property belonging to the individual it depicts. This is not a cosmetic distinction. It changes standing analysis, remedial options, and — most critically — who bears the operational burden of prevention and remediation.

Key Provisions

The Property Right in Digital Replicas

The bill's foundational move is declaring that an individual holds a property interest in any digital replica of their likeness, voice, or other identifying characteristics generated by an AI system.

This matters for three reasons.

First, property rights carry different remedial weight than privacy or publicity claims. Injunctive relief becomes more straightforward — courts are accustomed to ordering the return or destruction of property, and the irreparable-harm analysis tilts in the plaintiff's favor when the claim sounds in property rather than tort.

Second, a property framing simplifies standing. The plaintiff need not demonstrate consumer confusion (as under the Lanham Act, 15 U.S.C. § 1125(a)), reputational injury, or specific economic loss. The unauthorized creation of the replica is itself the cognizable harm.

Third, property rights are alienable. If SB 1142 treats digital replicas as property, individuals can license, assign, and monetize them — creating a consent-management market that currently operates through contract and custom rather than statutory entitlement.

Affirmative Platform Obligations

The bill's most operationally significant provision requires covered generative AI platforms to provide user-facing tools for reporting and revoking unauthorized digital replicas.

This goes well beyond notice-and-takedown. The Digital Dignity Act appears to impose a product-design mandate: platforms must build the infrastructure for identity claims into their systems, not merely respond to external complaints.

The technical implications are substantial. "Revocation" in the context of generative AI is not a single operation. It could mean any combination of:

  • Delisting — removing a specific output from a platform's hosted content
  • Deletion — purging stored outputs from platform infrastructure
  • Prompt-blocking — preventing future generation of a reported individual's likeness through input filters or embedding-based classifiers
  • Machine unlearning — retraining or fine-tuning models to remove the capability to reproduce a specific likeness

Each of these carries different technical costs, different efficacy profiles, and different implications for model integrity. The bill's failure to specify which forms of revocation satisfy the statutory duty creates both compliance uncertainty and litigation opportunity.[2]
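For a compliance team, these alternatives are not interchangeable, and a revocation pipeline has to decide which operations a given report triggers. The following is a minimal, hypothetical Python sketch of that decision; the class names, cost ordering, and escalation logic are illustrative assumptions, not anything the bill specifies:

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class RevocationMode(Enum):
    """The four revocation operations the bill leaves undifferentiated."""
    DELIST = auto()        # remove a specific output from hosted content
    DELETE = auto()        # purge stored outputs from platform infrastructure
    PROMPT_BLOCK = auto()  # filter future generation requests for the likeness
    UNLEARN = auto()       # retrain/fine-tune to remove the capability itself


# Rough, illustrative ordering only -- not empirical measurements.
RELATIVE_COST = {
    RevocationMode.DELIST: 1,
    RevocationMode.DELETE: 2,
    RevocationMode.PROMPT_BLOCK: 3,
    RevocationMode.UNLEARN: 4,
}


@dataclass
class RevocationRequest:
    claimant_id: str
    replica_ids: list[str]
    modes: set[RevocationMode] = field(default_factory=set)

    def escalation_order(self) -> list[RevocationMode]:
        """Apply cheaper, better-understood operations before costlier ones."""
        return sorted(self.modes, key=RELATIVE_COST.__getitem__)
```

The cost ordering encodes only the rough intuition that delisting is cheaper and better understood than unlearning; real values would come from platform-specific engineering estimates.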

Enforcement: Civil Penalties and Injunctive Relief

SB 1142 subjects noncompliant platforms to civil penalties and injunctive relief. The critical open question — which the enrolled text must answer — is whether the bill creates a private right of action, authorizes enforcement only by the Attorney General or local prosecutors, or both.

If private plaintiffs can sue directly, the bill becomes a litigation engine. California's population, its concentration of AI companies, and its plaintiff-friendly procedural rules would generate significant case volume. If enforcement is limited to state actors, the bill's practical impact depends on prosecutorial priorities and resource allocation.

Compliance Implications

Identity-Claim Infrastructure

Platforms operating in California will need end-to-end identity-claim workflows: intake mechanisms for replica reports, verification processes to confirm the claimant's identity and the replica's existence, enforcement pipelines to execute revocation, and audit trails to document compliance.

This is not trivial. It requires dedicated engineering resources, trust-and-safety staffing, and — for platforms that serve as both model providers and consumer-facing applications — clear allocation of responsibility across the stack.
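A minimal sketch of such a workflow, in Python, may make the moving parts concrete. All names are hypothetical, and the boolean flags stand in for real verification services the statute leaves unspecified:

```python
import datetime
from dataclasses import dataclass, field


@dataclass
class ReplicaClaim:
    claim_id: str
    claimant_name: str
    replica_url: str
    verified: bool = False
    audit_log: list[str] = field(default_factory=list)

    def record(self, event: str) -> None:
        """Append a timestamped entry to the claim's audit trail."""
        stamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
        self.audit_log.append(f"{stamp} {event}")


def process_claim(claim: ReplicaClaim, identity_ok: bool, replica_exists: bool) -> str:
    """Intake -> verification -> enforcement -> audit, in order.

    `identity_ok` and `replica_exists` are stand-ins for the identity
    verification and replica-existence checks a real platform would run.
    """
    claim.record("intake: claim received")
    if not identity_ok:
        claim.record("verification failed: claimant identity")
        return "rejected"
    if not replica_exists:
        claim.record("verification failed: replica not found")
        return "rejected"
    claim.verified = True
    claim.record("enforcement: revocation pipeline triggered")
    return "revoked"
```

Even this toy version shows why the audit trail belongs in the data model from the start: every branch, including rejections, must leave a documented record that compliance can later produce.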

Technical Controls Against Re-Generation

The bill's revocation mandate implies that removing a single output is insufficient. Platforms must also prevent the re-generation of a reported replica. This requires some form of identity-aware filtering — blocklists, classifier-based detection, or provenance-checking systems that can identify when a generation request targets a protected individual.

The state of the art here is imperfect. Identity classifiers produce false positives and false negatives. Blocklists are trivially circumvented through prompt engineering. Provenance systems like C2PA metadata can verify the origin of an output but cannot prevent generation in the first place.
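A naive prompt-blocking filter illustrates both the mechanism and its fragility. The sketch below is a hypothetical Python example (the blocklist, normalization, and matching rules are illustrative assumptions); note that even the normalization step defeats only the most trivial evasions:

```python
import re
import unicodedata

# Hypothetical blocklist of reported individuals; in practice this would be
# backed by a claims database, not a hard-coded set.
BLOCKLIST = {"jane doe"}


def normalize(text: str) -> str:
    """Fold case, strip accents and punctuation -- a first defense against
    trivial evasions, though substitutions like 'J4ne D0e' still pass."""
    text = unicodedata.normalize("NFKD", text)
    text = "".join(c for c in text if not unicodedata.combining(c))
    return re.sub(r"[^a-z0-9 ]", "", text.lower())


def generation_allowed(prompt: str) -> bool:
    """Naive prompt-blocking: reject if any blocklisted name appears."""
    cleaned = normalize(prompt)
    return not any(name in cleaned for name in BLOCKLIST)
```

Leetspeak substitutions already slip through this filter, which is exactly the circumvention problem that makes "reasonable technical measures" such a contested standard.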

Platforms will need to document "reasonable technical measures" — a standard that will inevitably be defined through litigation rather than regulation.[3]

Jurisdictional Scoping

California-specific legislation creates the perennial question: does the bill protect California residents regardless of where the platform operates, or does it regulate platforms available in California regardless of where the affected individual resides? The answer determines whether SB 1142 functions as a local consumer-protection statute or as a de facto national standard — the "California effect" that has driven privacy and environmental regulation for decades.

Preemption Risk

Defendants will raise two preemption arguments.

First, 17 U.S.C. § 301 preempts state-law rights that are "equivalent to" exclusive rights under the Copyright Act. If a digital replica qualifies as a "work of authorship" and the property right created by SB 1142 is functionally equivalent to copyright's reproduction or distribution rights, federal preemption applies. The counterargument — that a right in one's own likeness protects identity, not expression — has strong support in existing right-of-publicity caselaw, but the property framing introduces new texture.

Second, Section 230 immunity. The bill's emphasis on platform conduct (building tools, maintaining systems) rather than publisher liability (hosting content) appears designed to survive Section 230 challenges. But the line between "platform conduct" and "publisher decisions" is contested terrain, and defendants will push to characterize revocation obligations as content-moderation mandates subject to immunity.

Fiduciary Relevance

SB 1142 is best understood through two pillars of the Fiduciary Relevance Framework.

Duty of AI Due Care and Loyalty (Pillar 1)

The bill's affirmative platform obligations — build revocation tools, prevent re-generation, maintain audit trails — function as a statutory duty of care owed by platforms to the individuals whose likenesses their systems can reproduce. This is not a fiduciary duty in the common-law sense, but it operates in the same register: the platform possesses capabilities that create risk for identifiable individuals, and the law imposes an obligation to manage that risk proactively rather than reactively.

The property framing strengthens this analogy. A platform that holds or can generate someone's digital replica occupies a position structurally similar to a custodian holding someone's assets. The duty to provide revocation tools is the duty to return property on demand.

Transparency and Explainable Redress (Pillar 2)

The bill's enforcement mechanism — civil penalties and injunctive relief — only works if individuals can discover that unauthorized replicas exist and can navigate a comprehensible process to challenge them. This implies a transparency obligation that the bill may or may not make explicit: platforms must not only provide tools but must make those tools accessible, understandable, and effective.

The gap between "tool exists" and "tool works" is where most consumer-protection regimes fail. If SB 1142 does not specify response timelines, verification standards, or appeal mechanisms, platforms will build the minimum viable compliance apparatus — a reporting form that disappears into a queue.

Broader Significance

SB 1142 matters beyond California for three reasons.

First, it establishes a legislative template for treating AI-generated identity artifacts as property rather than as privacy violations or publicity misappropriations. This reframing — from tort to property — has downstream effects on remedies, standing, transferability, and the development of consent-management markets.

Second, it shifts the burden of identity protection from individuals to platforms. Under existing law, the person whose likeness is replicated must discover the violation, retain counsel, and prove their case. Under SB 1142, the platform must build the infrastructure to prevent and remediate violations before they are litigated. This is the difference between a right you can enforce and a right that is enforced for you.

Third, the bill's product-design mandate — requiring platforms to build revocation tools into their systems — aligns with the substrate-agnostic protection model advanced by frameworks like the Minnesota Digital Trust & Consumer Protection Act. The regulated harm is the function (unauthorized identity replication), not the medium (image, video, audio, text). The regulated duty is operational (build tools, maintain systems), not merely informational (disclose risks, post notices).

The bill has weaknesses. Its technical mandates may outpace the state of the art in machine unlearning. Its property framing invites preemption challenges. Its enforcement provisions — depending on final drafting — may prove either too aggressive (generating strike-suit litigation) or too weak (leaving enforcement to under-resourced state agencies).

But the direction is clear. California is building the legal infrastructure to treat your AI-generated likeness as yours — and to make the platforms that generate it responsible for keeping it that way.

That is fiduciary logic, whether the bill uses the word or not.

Notes

  1. SB 1142 was introduced by Senator Josh Becker. The bill text and legislative counsel digest are available at the California Legislative Information portal. Key provisions discussed here are drawn from the bill summary and publicly available descriptions; readers should consult the enrolled text for precise statutory language as the bill moves through committee.
  2. Machine unlearning remains an active area of research with no consensus on verification methods. A platform claiming to have "unlearned" a likeness currently has no standardized way to prove it. This gap between statutory obligation and technical capability will likely become a central compliance question.
  3. The "reasonable technical measures" framing echoes the "reasonable security" standard that has developed in data-breach litigation under state consumer protection statutes. Expect courts to look to industry practices, NIST frameworks, and expert testimony to define the floor.