AIGN Global EU 2024/1689 · Rev 2026-03
Regulatory Intelligence · Standards Reference

AIGN EU AI Act
Standards Development Map

Current working map of the Artificial Intelligence Act (EU 2024/1689) against emerging CEN-CENELEC and related standards activity under JTC 21. Designed as a regulatory orientation, planning and governance-design aid for high-risk AI, GPAI obligations, prohibited practices and conformity pathways. It distinguishes legal obligations, evolving standards coverage and AIGN interpretation logic. This is an interpretive reference, not legal advice.

Regulation (EU) 2024/1689
OJ L, 12.7.2024 · 144 pages
Map updated: 2026-03-24
Next major application date: 2026-08-02
What is binding now
The law text is binding on its own timeline. Legal obligations come from the EU AI Act itself, not from draft standards.
What is still emerging
Most standards shown here are development-stage pathways. They may support future conformity, but they do not yet create presumption of conformity.
What organisations should do now
Classify systems, assign provider/deployer roles, define accountable owners, build evidence packs, and prepare controls before OJEU citations exist.
AIGN reading
The real gap is not legal text alone. It is the operational governance layer between obligation, evidence, ownership and defensible decisions.
15
Standards total
incl. normative refs
2
Stage 10–10.99
Drafting
6
Stage 20–30.99
Working / Committee
4
Stage 40–50.99
Enquiry / DIS
3
Stage 60+
Published
0
Cited in OJEU
Conformity applies
This map is based on draft standards at the public enquiry (DIS) stage and on CEN-CENELEC project information. It reflects intended coverage as standards are being developed. Final harmonised standards may differ from current drafts. Enquiry-stage drafts are not yet approved by national bodies. This is an interpretive compliance aid and planning reference. Only the EU AI Act itself is legally binding, and only OJEU-cited harmonised standards can create presumption of conformity. Always consult the official EU AI Act text (OJ L 2024/1689) and published OJEU citations for compliance decisions. No cited standards exist yet — presumption of conformity does not yet apply.
Stage 10+ Drafting
Stage 20+ Working Draft
Stage 40+ Enquiry Draft
Stage 60+ Published
Intended Presumption of Conformity
Normative Reference
GPAI Obligation
Legal obligation
Emerging standards map
AIGN interpretation
AIGN Interpretation Layer
How to read this map defensibly
This page separates law, emerging technical standardisation and implementation interpretation. That matters because organisations often mistake a draft standard for a legal requirement, or assume that absence of a final standard means action can wait. Under the AIGN reading, neither assumption is safe.
Board relevance
Decision risk: unclear ownership creates liability before a technical control failure is ever discussed.
Evidence risk: without logs, documentation, oversight records and monitoring plans, compliance is hard to defend.
Timing risk: waiting for final harmonised standards usually means governance work starts too late.
Provider / Deployer implication
Providers need design, QMS, evidence and conformity pathways.
Deployers need role clarity, instructions-for-use controls, oversight capability and operational monitoring.
Both need a documented governance layer that survives audit, incident review and escalation.
Chapter III, Section 2 · Arts. 8–15 + 17
High-Risk AI Systems — Legal Requirements, Emerging Standards Mapping & Evidence View
EU AI Act Article · legal obligation
Primary Standard · emerging pathway + AIGN interpretation
Normative References · supporting material
Art. 9
Legal obligation
Risk Management System
Continuous iterative process across the full AI lifecycle. Identification, estimation, evaluation and mitigation of risks to health, safety and fundamental rights.
Binding legal requirement · Build before harmonisation
Expected evidence
Documented risk methodology, risk register, mitigation records, residual-risk decisions, review cadence (illustrative sketch below).
prEN 18228 Stage 20
AI Risk Management
Emerging standard + AIGN interpretation
Risk identification, estimation, evaluation and treatment for AI systems throughout lifecycle
AIGN implementation relevance
Use this as a design reference for lifecycle risk controls, not as a substitute for legal classification or accountability decisions.
Emerging standards pathway · AIGN: operationalise lifecycle accountability
Art. 9 Annex Z
prEN 18283:—
Concepts, measures and requirements for managing bias in AI systems
Parallel development
EN ISO/IEC TS 12791:2024
Treatment of unwanted bias in classification and regression ML tasks
Parallel development
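To make the Art. 9 evidence pack concrete, the sketch below shows one way a machine-readable risk-register entry could be structured: identification, estimation, mitigation, residual-risk acceptance and review cadence in a single record. It is a minimal illustration; the field names, risk levels and the example record are assumptions, not a schema defined by the Act or prEN 18228.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class RiskLevel(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class RiskRegisterEntry:
    """One lifecycle risk record; field names are illustrative, not prescribed."""
    risk_id: str
    description: str               # risk to health, safety or fundamental rights
    lifecycle_phase: str           # e.g. "design", "training", "deployment", "post-market"
    affected_groups: list[str]     # populations potentially impacted
    inherent_level: RiskLevel      # estimation before mitigation
    mitigations: list[str]         # controls applied
    residual_level: RiskLevel      # evaluation after mitigation
    residual_accepted_by: str      # accountable owner of the residual-risk decision
    next_review: date              # review-cadence evidence


register = [
    RiskRegisterEntry(
        risk_id="R-001",
        description="Model under-performs for under-represented applicant groups",
        lifecycle_phase="training",
        affected_groups=["job applicants"],
        inherent_level=RiskLevel.HIGH,
        mitigations=["bias testing on each dataset release", "threshold recalibration"],
        residual_level=RiskLevel.MEDIUM,
        residual_accepted_by="Head of Model Risk",
        next_review=date(2026, 9, 1),
    )
]

# Basic governance check: every residual risk needs a named accountable owner.
assert all(entry.residual_accepted_by for entry in register)
```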
Art. 10
Data and Data Governance
Training, validation and testing data quality. Data governance practices, bias examination, representativeness, statistical properties, data gap identification.
Binding legal requirement · Data governance cannot wait
Expected evidence
Dataset inventory, provenance records, representativeness checks, bias testing, data quality criteria, gap remediation log (illustrative sketch below).
prEN 18284 Stage 20
Quality and Governance of Datasets in AI
Data quality criteria, governance practices, bias detection, dataset representativeness and completeness
Emerging standards pathway · AIGN: make deployer-impact visible
AIGN implementation relevance
Most organisations need a data-governance decision layer linking datasets, intended purpose, affected populations and residual bias acceptance.
Art. 10 Annex Z
prEN 18283:—
Concepts, measures and requirements for managing bias in AI systems
Parallel development
EN ISO/IEC TS 12791:2024
Treatment of unwanted bias in classification and regression ML tasks
Parallel development
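As a concrete illustration of the representativeness checks listed above, the sketch below compares subgroup shares in a dataset against a reference population and flags gaps that would feed a data-gap remediation log. The attribute, reference shares and 5% tolerance are illustrative assumptions, not values from the Act or prEN 18284.

```python
# Minimal representativeness check: compare subgroup shares in a dataset
# against a reference population and flag gaps beyond a chosen tolerance.
from collections import Counter


def representativeness_gaps(records, attribute, reference_shares, tolerance=0.05):
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        if abs(observed - expected) > tolerance:
            gaps[group] = {"observed": round(observed, 3), "expected": expected}
    return gaps  # a non-empty result feeds the data-gap remediation log


training_set = [{"age_band": "18-29"}, {"age_band": "30-49"},
                {"age_band": "30-49"}, {"age_band": "50+"}]
reference = {"18-29": 0.20, "30-49": 0.40, "50+": 0.40}
print(representativeness_gaps(training_set, "age_band", reference))
```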
Art. 11 · 12
Technical Documentation & Record-Keeping
Pre-market technical documentation (Annex IV). Automatic logging of events over system lifetime. Traceability of inputs, outputs, decisions and human verification steps.
Binding legal requirement · AIGN: evidence over explanation
Expected evidence
Annex IV documentation pack, logging design, traceability matrix, retention logic, verification records (illustrative sketch below).
prEN 18229-1 Stage 20
AI Trustworthiness Framework
Part 1: Logging, Transparency and Human Oversight — logging capabilities, event recording, traceability requirements
Emerging standards pathway · AIGN: build defensibility artefacts
AIGN implementation relevance
This area is central for auditability. Logs and documentation are usually the first weakness exposed in review or incident response.
Art. 11 Art. 12 Art. 19 Annex Z
prEN ISO/IEC DIS 24970
Artificial intelligence — AI system logging
Parallel development
EN ISO/IEC 12792:2025
Transparency taxonomy of AI systems
Parallel development
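To make the logging and traceability expectations above concrete, the sketch below writes one structured event record per inference, with hashes linking each log line back to stored inputs and outputs and a reference to the human verification step. Field names and the JSON-lines format are assumptions for illustration, not requirements taken from the Act or prEN ISO/IEC DIS 24970.

```python
import hashlib
import json
from datetime import datetime, timezone


def log_inference_event(log_file, model_version, input_payload, output_payload,
                        reviewed_by=None):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hashes keep the log lightweight while preserving traceability
        # back to the stored inputs and outputs.
        "input_hash": hashlib.sha256(
            json.dumps(input_payload, sort_keys=True).encode()).hexdigest(),
        "output_hash": hashlib.sha256(
            json.dumps(output_payload, sort_keys=True).encode()).hexdigest(),
        "human_verification": reviewed_by,  # links the event to an oversight step
    }
    with open(log_file, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")  # one append-only JSON line per event
    return record


log_inference_event("ai_events.jsonl", "credit-scoring-v1.4",
                    {"applicant_id": "A-123", "features": [0.2, 0.7]},
                    {"score": 0.63, "decision": "refer"},
                    reviewed_by="analyst-17")
```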
Art. 13
Transparency and Information to Deployers
Instructions for use. Sufficient transparency for deployers to interpret outputs. System capabilities, limitations, accuracy levels, and known failure modes disclosed.
Binding legal requirement · Deployer-readable controls
Expected evidence
Instructions for use, limitations statement, intended purpose, accuracy disclosures, failure-mode communication.
prEN 18229-1 Stage 20
AI Trustworthiness Framework
Part 1: Logging, Transparency and Human Oversight
Emerging standards pathway · AIGN: enable deployer judgment
AIGN implementation relevance
Treat transparency as an operational control for the deployer, not just as a documentation output.
Art. 13 Annex Z
EN ISO/IEC 12792:2025
Transparency taxonomy of AI systems
Parallel development
Art. 14
Human Oversight
Human-machine interface enabling effective oversight. Measures to prevent automation bias. Ability to override, intervene, stop, and correct AI outputs. Competence requirements for oversight persons.
Binding legal requirement · Human accountability layer
Expected evidence
Oversight procedure, stop/override rights, escalation rules, operator training records, anti-automation-bias measures (illustrative sketch below).
prEN 18229-1 Stage 20
AI Trustworthiness Framework
Part 1: Logging, Transparency and Human Oversight — human-machine interface, oversight mechanisms, automation bias countermeasures
Emerging standards pathway · AIGN: preserve accountable HITL
AIGN implementation relevance
Human oversight fails when authority, competence and escalation rights are not explicitly designed into operating processes.
Art. 14 Annex Z
EN ISO/IEC 12792:2025
Transparency taxonomy of AI systems
Parallel development
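The sketch below illustrates one way the override and escalation expectations above can be made operational: outputs below a confidence threshold, or flagged as high-impact, are routed to a named reviewer and the outcome is recorded as oversight evidence. The 0.80 threshold, roles and field names are illustrative assumptions, not values from the Act or prEN 18229-1.

```python
def oversight_gate(prediction, confidence, high_impact, reviewer):
    """Return the final decision plus an auditable oversight record."""
    needs_review = high_impact or confidence < 0.80  # illustrative threshold
    record = {
        "model_decision": prediction,
        "confidence": confidence,
        "routed_to_human": needs_review,
        "reviewer": reviewer if needs_review else None,
    }
    if needs_review:
        # In practice this would block until the reviewer confirms, overrides
        # or stops the system; here an override is simulated for illustration.
        record["final_decision"] = "override: manual assessment"
    else:
        record["final_decision"] = prediction
    return record


print(oversight_gate("reject", confidence=0.62, high_impact=True,
                     reviewer="case-officer-04"))
```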
Art. 15
Accuracy, Robustness and Cybersecurity
Appropriate accuracy levels declared in instructions for use. Resilience to errors, faults and misuse. Resistance to adversarial attacks. Feedback loop mitigation. Cybersecurity proportionate to risk.
Binding legal requirement · Coordinate with cyber control owners
Expected evidence
Declared performance metrics, robustness testing, misuse scenarios, adversarial testing, cybersecurity control mapping (illustrative sketch below).
prEN 18229-2 Stage 10
AI Trustworthiness Framework
Part 2: Accuracy and Robustness — accuracy metrics, robustness assessment, performance consistency
Emerging standards pathway · AIGN: link metrics to liability thresholds
AIGN implementation relevance
Define performance thresholds that matter for legal, safety and operational decisions — not only model lab metrics.
Art. 15 Annex Z
prEN 18282 Stage 40
Cybersecurity Specifications for AI Systems
AI-specific vulnerability management, adversarial robustness, attack surface analysis
Emerging standards pathway · AIGN: join AI and cyber governance
AIGN implementation relevance
Cybersecurity for AI should be connected to enterprise cyber ownership, vulnerability management and incident governance.
Art. 15 Annex Z
EN ISO/IEC 24029-2:2023
Assessment of robustness of neural networks — Part 2: Formal methods
Parallel development
ISO/IEC DIS 24029-3:—
Assessment of robustness of neural networks — Part 3: Statistical methods
ISO/IEC only
prEN 18281:—
Evaluation methods for accurate computer vision systems
CEN-CENELEC only
prEN ISO/IEC CD 23282:—
Evaluation methods for accurate NLP systems
Parallel development
ISO/IEC CD 4213:— (2nd ed.)
Performance measurement for AI classification, regression, clustering and recommendation
ISO/IEC only
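To connect declared accuracy levels to an operational control, the sketch below checks measured metrics against the performance envelope stated in the instructions for use and reports breaches that would block a release. The metric names and thresholds are illustrative assumptions, not figures from the Act or prEN 18229-2.

```python
DECLARED_PERFORMANCE = {          # what the instructions for use promise
    "accuracy": 0.92,
    "false_positive_rate_max": 0.05,
}


def release_check(measured):
    """Return breaches of the declared performance envelope, if any."""
    breaches = []
    if measured["accuracy"] < DECLARED_PERFORMANCE["accuracy"]:
        breaches.append("accuracy below declared level")
    if measured["false_positive_rate"] > DECLARED_PERFORMANCE["false_positive_rate_max"]:
        breaches.append("false positive rate above declared maximum")
    return breaches


print(release_check({"accuracy": 0.90, "false_positive_rate": 0.04}))
# ['accuracy below declared level'] -> release blocked and documented as evidence
```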
Art. 17
Quality Management System
Documented QMS covering regulatory compliance strategy, design control, development quality assurance, testing procedures, data management, risk management integration, post-market monitoring, incident reporting, and record-keeping.
Binding legal requirement · Governance operating model
Expected evidence
QMS manual, governance roles, approval controls, release procedures, testing workflow, incident and post-market monitoring integration (illustrative sketch below).
prEN 18286 Stage 40
Quality Management System for EU AI Act Regulatory Purposes
QMS documentation, compliance procedures, design verification, post-market integration
Emerging standards pathway · AIGN: QMS is governance infrastructure
AIGN implementation relevance
The QMS is where legal compliance, technical controls, accountability and monitoring become one operating system.
Art. 17 Art. 72 Annex Z
prEN 18228
AI Risk Management (Art. 9 integration required by Art. 17(1)(g))
Normative cross-ref
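One simple way to make the QMS a checkable artefact rather than a static document is sketched below: each Art. 17(1) element is mapped to an accountable owner and at least one evidence item, and unowned or unevidenced elements surface as audit gaps. The element names are a paraphrase and the owners and evidence items are illustrative assumptions, not a prEN 18286 schema.

```python
# Each QMS element needs an owner and evidence; anything missing is an audit gap.
QMS_ELEMENTS = {
    "regulatory compliance strategy": {"owner": "Compliance", "evidence": ["strategy memo"]},
    "design control": {"owner": "Engineering", "evidence": ["design reviews"]},
    "testing procedures": {"owner": "QA", "evidence": ["test plans", "test reports"]},
    "data management": {"owner": "Data governance", "evidence": ["dataset inventory"]},
    "risk management (Art. 9)": {"owner": "Risk", "evidence": ["risk register"]},
    "post-market monitoring (Art. 72)": {"owner": "Operations", "evidence": []},
    "incident reporting (Art. 73)": {"owner": "Incident management", "evidence": ["reporting SOP"]},
}

gaps = [name for name, item in QMS_ELEMENTS.items()
        if not item["owner"] or not item["evidence"]]
print(gaps)  # ['post-market monitoring (Art. 72)']
```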
Art. 72
Post-Market Monitoring
Active systematic data collection on system performance throughout lifetime. Post-market monitoring plan as part of technical documentation. Continuous compliance evaluation. Integration with Art. 73 incident reporting.
Binding legal requirement · Continuous assurance
Expected evidence
Monitoring plan, performance thresholds, event triggers, issue triage, corrective actions, incident reporting workflow (illustrative sketch below).
prEN 18286 Stage 40
Quality Management System for EU AI Act Regulatory Purposes
Post-market monitoring plan template, performance data collection and analysis requirements
Emerging standards pathway · AIGN: monitoring must reach leadership
AIGN implementation relevance
Post-market monitoring should not remain a technical silo; it must feed governance review, escalation and corrective decision-making.
Art. 17 Art. 72 Annex Z
prEN ISO/IEC DIS 24970
AI system logging — supports post-market data collection
Parallel development
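The sketch below shows one way a post-market monitoring trigger can feed governance rather than remain a technical silo: rolling performance is compared against the declared level, and a breach raises an escalation event for review and corrective action. Window size, tolerance and the escalation handling are illustrative assumptions, not requirements from the Act or prEN 18286.

```python
from collections import deque


class MonitoringTrigger:
    def __init__(self, declared_accuracy, tolerance=0.03, window=500):
        self.declared = declared_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # rolling correctness flags

    def record(self, correct: bool):
        self.outcomes.append(correct)
        observed = sum(self.outcomes) / len(self.outcomes)
        if observed < self.declared - self.tolerance:
            # In practice this would open a corrective-action ticket and, where
            # Art. 73 applies, feed the serious-incident reporting workflow.
            return {"escalate": True, "observed": round(observed, 3),
                    "declared": self.declared}
        return {"escalate": False, "observed": round(observed, 3)}


monitor = MonitoringTrigger(declared_accuracy=0.92)
for outcome in [True, True, False, False, False]:
    status = monitor.record(outcome)
print(status)  # after a run of failures the trigger escalates
```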
Chapter II · Art. 5
Prohibited AI Practices — No Harmonised Standard Pathway
AIGN note: prohibited practices are not a “comply-through-standardisation” topic. They are threshold admissibility questions. The first governance task is to determine whether a use case is legally impermissible before discussing controls, evidence or optimisation.
Art. 5(1)(a)
Subliminal / Manipulative Techniques
AI systems that deploy techniques beyond a person's consciousness or use purposefully deceptive manipulation to materially distort behaviour causing significant harm.
Art. 5(1)(b)
Exploitation of Vulnerabilities
Systems exploiting age, disability, or social/economic vulnerability to distort behaviour in a manner that causes or is likely to cause significant harm.
Art. 5(1)(c)
Social Scoring
Evaluation or classification of natural persons over time based on social behaviour or personal characteristics, leading to detrimental or unfavourable treatment in unrelated contexts or that is unjustified or disproportionate.
Art. 5(1)(d)
Predictive Policing (Individual)
Risk assessments predicting the likelihood of a natural person committing a criminal offence based solely on profiling or personality traits, except as support for a human assessment grounded in objective and verifiable facts directly linked to criminal activity.
Art. 5(1)(e)
Untargeted Facial Scraping
Creating or expanding facial recognition databases through untargeted scraping of internet or CCTV footage.
Art. 5(1)(f)–(h)
Emotion Recognition / Biometric Categorisation / Real-Time Remote Biometric ID
Emotion recognition in workplaces and education institutions (except for medical or safety reasons). Biometric categorisation inferring sensitive attributes. Real-time remote biometric identification in publicly accessible spaces for law enforcement, with narrow exceptions only.
Chapter V · Arts. 51–55
General-Purpose AI (GPAI) Models — Legal Obligations, Operational Thresholds & Evolving Interpretation
All GPAI Providers — Art. 53
Baseline Obligations
Art. 53(a) · Technical documentation (Annex XI): model architecture, training process, evaluation results — maintained and provided to the AI Office and national competent authorities on request
Art. 53(b) · Information and documentation for downstream AI system providers (Annex XII): capabilities, limitations, integration guidance
Art. 53(c) · Copyright compliance policy — identify and respect reservations of rights under Directive (EU) 2019/790 Art. 4(3)
Art. 53(d) · Publicly available summary of training content, according to the AI Office template
Note: the Art. 53(1)(a) and (b) obligations do not apply to free and open-source models (unless the model presents systemic risk). AIGN view: even where exemptions apply, downstream governance and deployer-impact questions remain operationally relevant.
Systemic Risk Models — Art. 55
Additional Obligations · Threshold: > 10²⁵ FLOPs
Art. 55(a) · Model evaluation using standardised protocols and tools — including adversarial testing, documented results
Art. 55(b) · Systemic risk assessment and mitigation at Union level — including risk sources from development, market placement and use
Art. 55(c) · Serious incident tracking, documentation and reporting without undue delay to the AI Office and national authorities
Art. 55(d) · Adequate cybersecurity protection for the model and its physical infrastructure
Compliance may be demonstrated via Art. 56 codes of practice until harmonised standards are published. Operational interpretation remains dependent on evolving AI Office guidance and secondary clarification.
Classification — Art. 51
Systemic Risk Classification Criteria
A GPAI model is classified as having systemic risk if:

(a) It has high-impact capabilities, evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks; or

(b) The Commission determines equivalent capabilities ex officio or following a scientific panel alert, per Annex XIII criteria.

Presumption threshold: Cumulative training computation > 10²⁵ floating point operations (FLOPs). This threshold may be updated by delegated act (Art. 97) to reflect algorithmic improvements and hardware efficiency gains.
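For planning purposes, the sketch below gives a rough order-of-magnitude check against the presumption threshold, using the common 6 × parameters × training-tokens approximation for dense-model training compute. That approximation is an assumption for illustration only; the legal test is the cumulative compute actually used for training, however it is measured.

```python
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Art. 51(2) presumption, amendable by delegated act


def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    # Rule-of-thumb estimate for dense transformer training compute.
    return 6.0 * parameters * training_tokens


def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    return estimated_training_flops(parameters, training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS


# Example: a 70e9-parameter model trained on 15e12 tokens
# -> roughly 6.3e24 FLOPs, below the 1e25 presumption threshold.
print(presumed_systemic_risk(parameters=70e9, training_tokens=15e12))
```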
Transparency — Art. 50
AI Interaction Disclosure
Art. 50(1) · AI systems interacting with natural persons must disclose that they are AI, unless obvious from the context
Art. 50(2) · Synthetic content (audio, image, video, text) must be marked as AI-generated in a machine-readable format; marking must be effective, interoperable, robust and reliable
Art. 50(3) · Deployers of emotion recognition and biometric categorisation systems must inform the persons exposed to them
Art. 50(4) · Deepfake content must be disclosed as artificially generated or manipulated
Art. 113 · Entry into Force
Application Timeline — Legal Dates vs Readiness Implications
12 July 2024
Publication
OJ L 2024/1689 — Regulation enters into force 20 days after publication (1 August 2024)
2 February 2025
Phase 1 applies
Chapters I & II — Definitions and prohibited AI practices (Art. 5)
2 August 2025
Phase 2 applies
GPAI obligations (Chapter V), governance and AI Office authority (Chapter VII), penalties (Chapter XII) and confidentiality (Art. 78)
2 August 2026
General application date
All remaining provisions except the Art. 6(1) transition below — high-risk AI system requirements (Arts. 8–17, 26, 27, 43, 49, 72, 73). Readiness work should start before this date, not on it.
2 August 2027
Phase 4 applies
Art. 6(1) — High-risk AI systems embedded in regulated products (Annex I, Section B)
TBD 2027+
Standards cited in OJEU
Presumption of conformity activates — harmonised standards referenced in Official Journal
Arts. 40–43 · Conformity Assessment Pathway
Compliance & Conformity Assessment Flow — High-Risk AI Systems, Owners & Evidence
01
Risk Classification
Art. 6 · Annex I, III
Owner: Legal · Compliance · Product
02
Requirements Compliance
Arts. 8–17 · Risk, Data, Logging, Transparency, Oversight, Accuracy, QMS
Owner: Product · Engineering · Risk · Compliance
03
Apply Harmonised Standards
Art. 40 · prEN 18228, 18229-1/2, 18282, 18284, 18286
Owner: Standards lead · Compliance · QA
04
Conformity Assessment
Art. 43 · Internal control (Annex VI) or Notified Body (Annex VII)
Owner: Compliance · QA · Notified Body interface
05
Technical Documentation
Art. 11 · Annex IV — drawn up before market placement
Owner: Product · Engineering · Technical documentation lead
06
EU Declaration of Conformity
Art. 47 · CE marking (Art. 48)
Owner: Authorised sign-off · Compliance
07
Registration
Art. 49 · EU Database (Art. 71) prior to market placement
Owner: Regulatory operations
08
Post-Market Monitoring
Arts. 72–73 · Ongoing monitoring; serious incidents reported no later than 15 days after awareness (shorter deadlines for the most serious cases)
Owner: Operations · Risk · Incident management · Leadership oversight
AIGN note: the highest practical failure rate is usually not in understanding the sequence above, but in assigning owners, generating evidence early enough, and keeping provider and deployer responsibilities distinct in real operations.
Working Reference
All Standards — CEN-CENELEC JTC 21 Development Status with AIGN Reading
Standard ID
Title and Scope
Status & Type
prEN 18228
AI Risk Management
Risk management processes for AI systems — Art. 9 coverage
Art. 9 · Annex Z · Primary
Stage 20  CEN-CENELEC
prEN 18284
Quality and Governance of Datasets in AI
Dataset quality criteria, governance practices, bias examination — Art. 10 coverage
Art. 10 · Annex Z · Primary
Stage 20  CEN-CENELEC
prEN 18229-1
AI Trustworthiness Framework — Part 1
Logging, transparency and human oversight — Arts. 11–14 coverage
Arts. 11–14 · Annex Z · Primary
Stage 20  CEN-CENELEC
prEN 18229-2
AI Trustworthiness Framework — Part 2
Accuracy and robustness — Art. 15 coverage. Launched for working draft consultation Dec 2025.
Art. 15 · Annex Z · Primary
Stage 10  CEN-CENELEC
prEN 18282
Cybersecurity Specifications for AI Systems
AI-specific cybersecurity requirements — Art. 15(5) coverage
Art. 15 · Annex Z · Primary
Stage 40  CEN-CENELEC
prEN 18286
Quality Management System for EU AI Act Regulatory Purposes
QMS and post-market monitoring — Arts. 17, 72 coverage. Source of Annex B mapping.
Art. 17 · Art. 72 · Annex Z · Primary
Stage 40  CEN-CENELEC
EN ISO/IEC TS 12791:2024
Treatment of Unwanted Bias in Classification and Regression ML Tasks
Normative reference for bias management — supports Arts. 9, 10
Art. 9 · Art. 10
Published  Parallel development
EN ISO/IEC 12792:2025
Transparency Taxonomy of AI Systems
Normative reference for transparency requirements — supports Arts. 13, 14
Art. 13 · Art. 14
Published  Parallel development
EN ISO/IEC 24029-2:2023
Assessment of Robustness of Neural Networks — Part 2: Formal Methods
Normative reference for robustness assessment — supports Art. 15
Art. 15
Published  Parallel development
prEN 18283:—
Concepts, Measures and Requirements for Managing Bias in AI Systems
Updated location confirmed Feb 2026 per JTC 21 newsletter — supports Arts. 9, 10
Art. 9 · Art. 10
Stage 20  Parallel development
prEN ISO/IEC DIS 24970
Artificial Intelligence — AI System Logging
Supports Art. 12 logging requirements and Art. 72 post-market data collection
Art. 12 · Art. 72
Stage 40  Parallel development
ISO/IEC DIS 24029-3:—
Assessment of Robustness of Neural Networks — Part 3: Statistical Methods
Robustness assessment via statistical methods — supports Art. 15
Art. 15
Stage 40  ISO/IEC only
ISO/IEC CD 4213:— 2nd ed.
Performance Measurement for AI Classification, Regression, Clustering and Recommendation
Accuracy metrics and performance benchmarks — supports Art. 15
Art. 15
Stage 10  ISO/IEC only
prEN ISO/IEC CD 23282:—
Evaluation Methods for Accurate Natural Language Processing Systems
NLP accuracy evaluation — supports Art. 15 for language AI systems
Art. 15
Stage 10  Parallel development
prEN 18281:—
Evaluation Methods for Accurate Computer Vision Systems
Scope updated March 2026 — progressed to Stage 40. Computer vision accuracy evaluation — supports Art. 15
Art. 15
Stage 40  CEN-CENELEC only
Map Changelog
Date · Standard · Type · Description
2026-03-24 · prEN 18281 · Scope · Updated scopes of prEN 18281 and 18286
2026-03-24 · prEN 18281 · Stage · Status updated to Stage 40 — Enquiry Draft
2026-02-22 · Normative refs · Info · Corrected location of prEN 18283 based on latest JTC 21 information
2026-02-13 · Normative refs · Info · Added normative references based on JTC 21 Inclusiveness newsletter
2026-01-08 · Documentation · Info · Added FAQ section to documentation
2025-12-09 · prEN 18229-2 · Stage · Launched for working draft consultation with national bodies
2025-12-01 · All · Initial · Initial publication based on prEN 18286 Annex B and CEN-CENELEC project scopes