EU Regulation
Full Enforcement Arrives August 2026

The EU AI Act reaches full enforcement in August 2026.
Is your AI governance ready?

The world's first comprehensive AI regulation is here. If your organization builds, deploys, or uses AI systems that touch EU citizens or operations, you're in scope. Non-compliance means fines up to 7% of global revenue. The compliance window is closing.

7% of global annual revenue — maximum fine for non-compliance with prohibited practices
85% of organizations using AI have not started EU AI Act compliance programs
500+ pages of regulatory requirements across risk tiers, sectors, and use cases
16 months remaining until full enforcement — high-risk AI systems must be compliant by August 2026

Three pillars of the world's first comprehensive AI law

The EU AI Act introduces a risk-based framework for AI systems. Every organization using AI in EU markets must classify their systems, implement controls, and prove compliance — or face enforcement.

⚠️

Risk Classification System

Every AI system must be classified into one of four risk tiers: unacceptable, high-risk, limited, or minimal. Your classification determines your obligations — from outright prohibition to transparency requirements to full conformity assessments.

Art. 5-7 — Risk Tiers
🚫

Prohibited Practices

Social scoring, real-time biometric surveillance, manipulation of vulnerable groups, and predictive policing are banned outright. Organizations must audit existing AI systems to confirm none fall into prohibited categories — violations carry the highest fines.

Art. 5 — Prohibited AI
📋

High-Risk AI Obligations

High-risk AI systems — hiring, credit scoring, healthcare diagnostics, critical infrastructure — require risk management systems, data governance, technical documentation, human oversight, and ongoing monitoring. This is where most compliance work lives.

Art. 8-15 — High-Risk Requirements

Four risk tiers. Know where you stand.

Unacceptable Risk

Banned

AI systems that pose clear threats to fundamental rights. Prohibited from February 2025.

Social scoring systems
Exploiting vulnerable groups
Real-time biometric ID

High Risk

Heavy Obligations

AI in critical sectors. Requires conformity assessments, documentation, and ongoing monitoring.

Hiring & recruitment AI
Credit scoring
Medical device AI
Critical infrastructure

Limited Risk

Transparency Required

AI systems that interact with people. Must disclose AI nature and meet transparency obligations.

Chatbots & virtual assistants
Deepfake generators
Emotion recognition

Minimal Risk

Voluntary Codes

Most AI systems. No mandatory requirements, but encouraged to follow codes of conduct.

Spam filters
AI-enhanced games
Inventory optimization
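The four tiers above translate naturally into a first-pass inventory structure. A minimal sketch in Python (the category keywords and the default-to-high rule are illustrative assumptions, not legal determinations — real classification requires analysis against Art. 6 and Annex III):

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (Art. 5)"
    HIGH = "full conformity obligations (Art. 8-15)"
    LIMITED = "transparency obligations"
    MINIMAL = "voluntary codes of conduct"

# Illustrative category-to-tier mapping; not a legal classification.
TIER_BY_CATEGORY = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "hiring": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(category: str) -> RiskTier:
    # Default unknown systems to HIGH so they trigger review
    # rather than silently slipping through the inventory.
    return TIER_BY_CATEGORY.get(category, RiskTier.HIGH)

for cat in ["chatbot", "credit_scoring", "drone_navigation"]:
    print(f"{cat}: {classify(cat).name}")
```

Defaulting unclassified systems to high-risk is a deliberately conservative choice: under-classification is what draws enforcement attention.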

If you serve EU customers or use AI in EU operations — you're in scope

The EU AI Act has extraterritorial reach. Like GDPR, it applies to any organization whose AI outputs affect EU citizens — regardless of where the company is headquartered.

🏥 Healthcare

AI-assisted diagnostics, clinical decision support, patient triage, and drug discovery tools serving EU patients or providers. Most medical AI qualifies as high-risk, whether through the medical-device route under Art. 6(1) or, for emergency triage, under Annex III.

💼 Financial Services

AI credit scoring, fraud detection, algorithmic trading, and robo-advisory serving EU markets. Creditworthiness assessment is explicitly named as high-risk.

🛡️ Defense & Government

AI used in law enforcement, border management, judicial systems, and public administration within EU jurisdictions. Subject to the strictest requirements. Note that AI developed exclusively for military purposes falls outside the Act's scope.

💻 SaaS & Technology

Any AI-powered software product deployed to EU users — from customer service chatbots to AI content generation. Providers and deployers both carry obligations.

🏭 Manufacturing & Infrastructure

AI in safety-critical systems, quality control, predictive maintenance, and industrial automation. AI embedded in machinery falls under existing product safety regulation plus AI Act.

📚 Education & HR

AI-driven admissions, automated grading, hiring tools, and employee monitoring systems. Recruitment and educational AI are explicitly high-risk under Annex III.

Strategic AI Alignment for EU AI Act Compliance

Altiri's framework maps directly to EU AI Act obligations — from initial AI inventory and risk classification through technical documentation, conformity assessment preparation, and ongoing governance. We turn 500+ pages of regulation into a clear compliance roadmap.

01

AI System Inventory & Classification

Catalog every AI system in your organization. Classify each against EU AI Act risk tiers. Identify prohibited uses, high-risk systems, and transparency obligations.

02

Gap Analysis & Compliance Roadmap

Map current governance posture against EU AI Act requirements. Identify specific gaps in risk management, data governance, documentation, and human oversight. Prioritize by enforcement timeline and risk exposure.

03

Implementation & Documentation

Build conformity assessment packages, technical documentation, risk management systems, and monitoring frameworks. Create the evidence trail regulators expect.

04

Ongoing Governance & Monitoring

Fractional vCAIO leadership to maintain compliance as the regulatory landscape evolves, new AI systems are deployed, and enforcement guidance matures.

What You Get
EU AI Act Compliance Package
AI System Inventory — complete catalog with EU AI Act risk classifications
Risk Classification Assessment — Art. 6 analysis for every AI system against Annex III categories
Prohibited Practices Audit — confirm no AI systems fall into Art. 5 banned categories
Technical Documentation — Art. 11 conformity documentation for high-risk systems
Risk Management Framework — Art. 9 continuous risk management system design
Data Governance Plan — Art. 10 training data quality and bias mitigation controls
Human Oversight Design — Art. 14 human-in-the-loop requirements for high-risk AI
Ongoing vCAIO Support — fractional AI governance leadership through enforcement
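The article-by-article package above can be tracked as a simple gap-analysis checklist. A hypothetical sketch in Python (the completion flags are placeholders, not a real engagement status):

```python
# Deliverables keyed by the EU AI Act article they address, as listed
# in the package above. Completion flags are illustrative placeholders.
checklist = {
    "Art. 5":  ("Prohibited Practices Audit", True),
    "Art. 6":  ("Risk Classification Assessment", True),
    "Art. 9":  ("Risk Management Framework", False),
    "Art. 10": ("Data Governance Plan", False),
    "Art. 11": ("Technical Documentation", False),
    "Art. 14": ("Human Oversight Design", False),
}

# Surface what still stands between you and a defensible program.
outstanding = [f"{art}: {name}"
               for art, (name, done) in checklist.items() if not done]
print(f"{len(outstanding)} deliverables outstanding:")
for item in outstanding:
    print(" -", item)
```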
Patrick Parker
Fractional vCAIO & AI Governance Lead
NIST AI RMF Practitioner
ISO 42001 Aligned
CMMC Registered Practitioner
Cross-Regulatory AI Governance
HIPAA & Financial Services GRC

"The EU AI Act isn't just a European problem. If your AI touches EU citizens — patients, customers, employees — you're in scope. The organizations that start their compliance journey now will have a defensible program by enforcement. The ones that wait will be scrambling through August 2026."

🌐

Cross-Regulatory AI Governance

Deep expertise mapping between NIST AI RMF, ISO 42001, and EU AI Act requirements. Builds unified governance programs that satisfy multiple regulatory frameworks simultaneously.

📋

Conformity Assessment Preparation

Experience preparing organizations for regulatory assessments across healthcare (HIPAA), defense (CMMC), and financial services — the same rigor applied to EU AI Act conformity requirements.

🛠️

Practical Implementation Focus

Translates complex regulatory text into operational governance programs. Every deliverable is designed to be used by your teams — not filed in a drawer.

The enforcement clock is ticking. Start your compliance program now.

Take the free AI Readiness Assessment to understand your current governance posture, identify high-risk AI systems, and get a prioritized compliance roadmap — before August 2026.

Free assessment · No commitment · Results delivered immediately