The European Commission appears to be blinking.

After years of positioning the EU AI Act as the global gold standard for AI governance — a comprehensive, risk-based framework with real enforcement teeth — Brussels is now floating a "Digital Omnibus" proposal that would delay the application of high-risk AI rules. The stated rationale: harmonised standards aren't ready, and companies need them to demonstrate conformity. The unstated rationale: industry pushback has been relentless, and the Commission's own standards bodies are behind schedule.

This matters far beyond Brussels. Every multinational building an AI compliance program has been calibrating to EU timelines. A delay doesn't eliminate obligations — it shifts when enforcement bites, creates regulatory arbitrage opportunities, and risks eroding the protections the AI Act was designed to deliver.

The Policy Landscape

The EU AI Act, which entered into force in August 2024, operates on a phased implementation schedule. Prohibited practices took effect first. Transparency obligations and general-purpose AI model rules followed. The most consequential provisions — governing high-risk AI systems — were set to apply in stages through 2025 and 2026, requiring conformity assessments, technical documentation, quality management systems, post-market monitoring, and incident reporting.[1]

The architecture depends on harmonised standards developed by European Standardisation Organisations — primarily CEN and CENELEC — to give companies a concrete path to presumption of conformity.

The problem: those standards aren't ready. CEN-CENELEC Joint Technical Committee 21 has been working on AI-specific standards, but the timeline has slipped. Without finalised harmonised standards, companies face a compliance vacuum — they know what they must achieve but lack the recognised methodology for demonstrating they've achieved it.

The Commission's reported response is a "Digital Omnibus" simplification package that would, among other things, push back the application dates for high-risk AI obligations.[2] Reports also suggest the European Data Protection Board has pushed back on aspects of the proposal, though the specific EDPB position requires confirmation from official EDPB opinions or statements.[3]

Stakeholders and Their Interests

The European Commission and AI Office face a credibility problem. Enforcing rules without workable standards invites legal challenges and industry defiance. But delaying rules invites accusations of regulatory capture. The AI Office, tasked with coordinating implementation, needs standards to operationalise its oversight role.

European Standardisation Organisations (CEN/CENELEC) are under pressure to deliver technical standards that translate legal requirements into auditable criteria. Standards development is inherently slow — consensus-driven, technically complex, and politically fraught. Rushing produces bad standards. Waiting produces no standards.

AI providers and deployers — particularly large multinationals — have invested heavily in compliance programs benchmarked to published timelines. Some welcome delay as breathing room. Others, having already built compliance infrastructure, see delay as rewarding laggards and punishing early movers. The split tracks a familiar pattern: incumbents with compliance budgets prefer certainty; smaller players and late entrants prefer flexibility.

Deployers in high-risk sectors — healthcare, employment, law enforcement, critical infrastructure — face the sharpest tension. Their AI systems affect fundamental rights now, regardless of when Brussels says formal enforcement begins. A regulatory delay doesn't make a biased hiring algorithm less harmful.

Data protection authorities and the EDPB have a distinct concern: the AI Act's interaction with GDPR. Any simplification package that loosens AI-specific obligations without accounting for GDPR's independent requirements creates confusion about which regime governs what. The EDPB has consistently maintained that GDPR applies fully to AI systems processing personal data, and any perceived weakening of AI-specific rules could shift enforcement pressure onto data protection authorities who are already resource-constrained.

Affected individuals — the people subject to high-risk AI decisions — have no seat at the negotiating table. Their interests are represented, if at all, through civil society organizations and the regulatory framework itself. A delay in high-risk obligations is, functionally, a delay in their protections.

Analysis: What a Delay Does and Doesn't Change

The Standards Gap Is Real

The case for delay has a legitimate core. Conformity assessment without harmonised standards is like requiring a building inspection without a building code — inspectors can exercise judgment, but the results will be inconsistent, contestable, and expensive. Companies forced to demonstrate compliance through internal conformity assessment procedures without reference standards face genuine uncertainty about what "good enough" looks like.

This is not a new problem in EU product regulation. The New Legislative Framework has always contemplated the possibility that harmonised standards lag behind legislative requirements. The standard response: companies can use other means to demonstrate conformity, but lose the presumption that comes with following harmonised standards. It works, but it's costly and creates litigation risk.

Duties Don't Disappear

A delay in application dates does not eliminate the underlying obligations. It shifts when they become enforceable, not whether they exist. This distinction matters enormously for corporate governance.

The AI Act's prohibited practices are already in effect. Transparency obligations for certain AI systems are already in effect. GDPR applies independently and continuously. Regulation (EU) 2016/679 (the GDPR) requires data protection impact assessments for high-risk processing, imposes accountability obligations on controllers, and provides individual rights that AI deployers must respect — none of which are affected by AI Act timing changes.

Enforcement Focus Shifts

If high-risk obligations are delayed, enforcement attention concentrates on what's already live: prohibited AI practices (social scoring, real-time remote biometric identification in publicly accessible spaces subject to narrow exceptions, manipulative techniques), transparency duties, and general-purpose AI model obligations. This creates an interesting dynamic.

Regulators with limited resources will pursue the cases they can bring. That means Article 5 prohibited practices and Article 50 transparency requirements become the enforcement frontier. Companies that have focused compliance efforts exclusively on high-risk classification may find themselves exposed on obligations they treated as secondary.

The Global Divergence Problem

The EU AI Act has functioned as a de facto global standard-setter — the "Brussels Effect" applied to AI governance. A delay undermines that gravitational pull.

U.S. companies operating under the NIST AI Risk Management Framework, sector-specific federal guidance, and emerging state-level AI legislation are not waiting for Brussels. The Colorado AI Act imposes obligations on deployers of high-risk AI systems effective February 2026. Enterprise procurement teams increasingly demand AI risk documentation regardless of regulatory jurisdiction. A company that tells its customers "we're waiting for the EU to finalize its timeline" will lose deals to competitors who can demonstrate robust AI governance now.

This is where the analysis connects to the broader fiduciary accountability framework. Under Pillar 1 — Duty of AI Due Care and Loyalty — the question isn't whether a regulator has set a deadline. The question is whether the entity deploying an AI system has exercised the care that the relationship with affected individuals demands. Regulatory timelines are floors, not ceilings.

The Bonded Credential Parallel

The standards gap exposes a structural weakness in the EU's approach: compliance depends on centralized standard-setting bodies operating on bureaucratic timelines, while AI capabilities evolve on market timelines. The mismatch is predictable and recurring.

The Minnesota Digital Trust framework's concept of bonded credentials offers a different architecture. Rather than waiting for a centralized body to publish a harmonised standard, bonded credentials allow issuers to make verifiable, auditable attestations about system properties — intended purpose, performance boundaries, monitoring commitments, known limitations — backed by financial bonds that create accountability independent of regulatory timing.

This maps directly to Pillar 2 — Transparency and Explainable Redress. The question isn't whether a harmonised standard exists for a particular AI system category. The question is whether the provider has made its system's properties, limitations, and risk profile transparent in a form that deployers and affected individuals can verify and act on. Standards facilitate that transparency; they don't create the underlying duty.

Unintended Consequences

Three risks deserve attention.

First, a delay signals that AI regulation is negotiable. Every future compliance deadline becomes a starting position rather than a commitment. Industry actors who invested in timely compliance learn that lobbying for extensions is a viable strategy. This corrodes regulatory credibility across the entire digital governance stack — not just the AI Act, but the Digital Services Act (Regulation (EU) 2022/2065), the Data Act (Regulation (EU) 2023/2854), and future instruments.

Second, the delay may widen the gap between EU and non-EU AI governance regimes, creating compliance fragmentation rather than the convergence Brussels has sought. Companies operating globally will face pressure to maintain the highest applicable standard across jurisdictions — which may no longer be the EU's.

Third, and most consequentially for Pillar 3 — Access to Justice and Liability — individuals harmed by high-risk AI systems during the delay period face an enforcement vacuum. The AI Act's incident reporting, post-market monitoring, and conformity assessment requirements exist because high-risk AI systems can cause serious harm. Delaying those requirements doesn't delay the harm. It delays the accountability infrastructure designed to address it.

Recommendations

For companies with existing AI Act compliance programs: Do not pause. Treat the current timeline as operative until a formal legislative amendment is published in the Official Journal. Even if dates shift, the substantive requirements will not change, and the compliance infrastructure you're building will be required eventually — and is defensible now under GDPR, sector-specific rules, and enterprise procurement standards.

For companies that haven't started: A delay is not a reprieve. Begin with system inventory and risk classification. Map your AI systems against Annex III high-risk categories. Document your classification rationale. This foundational work is timeline-independent and will be the first thing regulators and auditors request.

For legal and compliance teams: Monitor three things: (1) the formal Commission proposal text when published, (2) CEN-CENELEC JTC 21 standards development milestones, and (3) AI Office guidance documents that may provide interim compliance pathways. Build compliance evidence that doesn't depend on harmonised standards — internal risk assessments, testing protocols, monitoring frameworks, incident response plans — because these demonstrate due care regardless of the standards timeline.
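The standards-independent evidence base described above can be tracked with a simple gap check. The evidence categories below are taken from the list in the text, not from any harmonised standard, and the helper is a hypothetical sketch of how a compliance team might surface what is still missing per system.

```python
# Evidence categories drawn from the text; a real program would map
# these to the AI Act's specific articles and internal policy documents.
REQUIRED_EVIDENCE = [
    "internal_risk_assessment",
    "testing_protocol",
    "monitoring_framework",
    "incident_response_plan",
]

def evidence_gaps(collected: dict[str, bool]) -> list[str]:
    """Return the evidence categories not yet in place for a system."""
    return [item for item in REQUIRED_EVIDENCE if not collected.get(item, False)]

status = {
    "internal_risk_assessment": True,
    "testing_protocol": True,
    "monitoring_framework": False,
}
print(evidence_gaps(status))  # ['monitoring_framework', 'incident_response_plan']
```

Evidence that exists before a harmonised standard is published demonstrates due care on its own terms, which is precisely the property a shifting timeline cannot take away.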

For policymakers: If delay is necessary, pair it with interim enforcement guidance that maintains accountability for high-risk systems already in deployment. A regulatory gap is acceptable only if alternative accountability mechanisms — incident reporting to the AI Office, voluntary conformity assessment, transparency registrations — fill the space. Otherwise, the delay is a protection gap, and affected individuals bear the cost.

For the standards community: Publish draft standards for public comment even if finalisation is months away. Interim drafts give companies something to build against and reduce the binary nature of the standards-or-nothing compliance architecture. The perfect standard published late is worth less than the good standard published on time.

The Commission's instinct to align regulatory deadlines with standards readiness is defensible in principle. But the execution matters. A delay that maintains accountability pressure — through interim guidance, voluntary frameworks, and continued GDPR enforcement — is fundamentally different from a delay that creates a protection vacuum.

The fiduciary question remains constant: who owes duties to whom, and are those duties being honored? Regulatory timelines are instruments for enforcing duties, not the source of them. The duties exist because AI systems affect people. That doesn't change when a deadline moves.

Notes

  1. The AI Act's phased application dates are set out in Article 113 of Regulation (EU) 2024/1689. High-risk obligations for systems covered under Annex III apply from August 2, 2026, while those for Annex I systems (already regulated under existing Union harmonisation legislation) apply from August 2, 2027.
  2. The existence and precise scope of the Digital Omnibus proposal require verification against the formal Commission proposal text (COM document) or EUR-Lex entry. The Commission's AI regulatory framework page at digital-strategy.ec.europa.eu reflects phased implementation but does not itself confirm a specific omnibus delay proposal as of the date of analysis.
  3. EDPB involvement would likely take the form of a formal opinion or joint statement; verification against EDPB published documents is necessary before treating this as established fact.