EU AI Act Article · legal obligation
Primary Standard · emerging pathway + AIGN interpretation
Normative References · supporting material
Art. 9
Legal obligation
Risk Management System
Continuous, iterative process across the full AI lifecycle: identification, estimation, evaluation and mitigation of risks to health, safety and fundamental rights.
Binding legal requirement · Build before harmonisation
Expected evidence
Documented risk methodology, risk register, mitigation records, residual-risk decisions, review cadence.
prEN 18228 Stage 20
AI Risk Management
Emerging standard + AIGN interpretation
Risk identification, estimation, evaluation and treatment for AI systems throughout lifecycle
AIGN implementation relevance
Use this as a design reference for lifecycle risk controls, not as a substitute for legal classification or accountability decisions.
Emerging standards pathway · AIGN: operationalise lifecycle accountability
Art. 9
Annex Z
prEN 18283:—
Concepts, measures and requirements for managing bias in AI systems
Parallel development
EN ISO/IEC TS 12791:2024
Treatment of unwanted bias in classification and regression ML tasks
Parallel development
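The expected evidence above includes a risk register with residual-risk decisions and an accountable sign-off. As one hedged illustration (the field names and the simple likelihood × impact scoring are assumptions for this sketch, not prescribed by Art. 9 or prEN 18228), a register entry and a review filter could look like:

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One row of a risk register; all field names are illustrative."""
    risk_id: str
    description: str
    likelihood: int          # 1 (rare) .. 5 (almost certain)
    impact: int              # 1 (negligible) .. 5 (severe)
    mitigation: str
    residual_accepted: bool  # has an accountable owner signed off?

    def score(self) -> int:
        # Likelihood x impact, used only to order triage.
        return self.likelihood * self.impact

def needs_review(register, threshold=12):
    """High-scoring risks with no recorded residual-risk decision."""
    return [r.risk_id for r in register
            if r.score() >= threshold and not r.residual_accepted]
```

The point of the sketch is the last field: a high residual risk with no named acceptance is exactly the gap a review cadence should surface.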
Art. 10
Data and Data Governance
Training, validation and testing data quality. Data governance practices, bias examination, representativeness, statistical properties, data gap identification.
Binding legal requirement · Data governance cannot wait
Expected evidence
Dataset inventory, provenance records, representativeness checks, bias testing, data quality criteria, gap remediation log.
prEN 18284 Stage 20
Quality and Governance of Datasets in AI
Data quality criteria, governance practices, bias detection, dataset representativeness and completeness
Emerging standards pathway · AIGN: make deployer-impact visible
AIGN implementation relevance
Most organisations need a data-governance decision layer linking datasets, intended purpose, affected populations and residual bias acceptance.
Art. 10
Annex Z
prEN 18283:—
Concepts, measures and requirements for managing bias in AI systems
Parallel development
EN ISO/IEC TS 12791:2024
Treatment of unwanted bias in classification and regression ML tasks
Parallel development
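The data-governance decision layer described above — linking datasets, intended purpose, affected populations and residual bias acceptance — can be sketched minimally. The record fields below are assumptions chosen to mirror the expected-evidence list, not fields defined by prEN 18284:

```python
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    """One dataset inventory entry; field names are illustrative."""
    name: str
    provenance: str                  # origin and collection method
    intended_purpose: str            # system purpose the data supports
    affected_populations: list = field(default_factory=list)
    bias_tested: bool = False
    residual_bias_accepted_by: str = ""  # accountable owner, if any

def governance_gaps(inventory):
    """Datasets with neither bias testing nor a recorded acceptance."""
    return [d.name for d in inventory
            if not d.bias_tested and not d.residual_bias_accepted_by]
```

A gap remediation log then simply tracks how the names returned here were closed out.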
Art. 11 · 12
Technical Documentation & Record-Keeping
Pre-market technical documentation (Annex IV). Automatic logging of events over system lifetime. Traceability of inputs, outputs, decisions and human verification steps.
Binding legal requirement · AIGN: evidence over explanation
Expected evidence
Annex IV documentation pack, logging design, traceability matrix, retention logic, verification records.
prEN 18229-1 Stage 20
AI Trustworthiness Framework
Part 1: Logging, Transparency and Human Oversight — logging capabilities, event recording, traceability requirements
Emerging standards pathway · AIGN: build defensibility artifacts
AIGN implementation relevance
This area is central for auditability. Logs and documentation are usually the first weakness exposed in review or incident response.
Art. 11
Art. 12
Art. 19
Annex Z
prEN ISO/IEC DIS 24970
Artificial intelligence — AI system logging
Parallel development
EN ISO/IEC 12792:2025
Transparency taxonomy of AI systems
Parallel development
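Art. 12's traceability of inputs, outputs, decisions and human verification steps can be captured as structured event records. A minimal sketch follows; the field names and JSON shape are assumptions for illustration, not the schema of prEN ISO/IEC DIS 24970:

```python
import json
from datetime import datetime, timezone

def log_event(system_id, event_type, inputs_ref, output_ref,
              human_verified_by=None):
    """Serialise one traceability record with a UTC timestamp.
    References point at stored artefacts rather than raw data,
    which keeps retention logic separate from the log itself."""
    record = {
        "system_id": system_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,          # e.g. "inference", "override"
        "inputs_ref": inputs_ref,
        "output_ref": output_ref,
        "human_verified_by": human_verified_by,
    }
    return json.dumps(record)
```

Keeping human verification as an explicit field is what lets a reviewer reconstruct who confirmed what, and when, during incident response.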
Art. 13
Transparency and Information to Deployers
Instructions for use. Sufficient transparency for deployers to interpret outputs. System capabilities, limitations, accuracy levels, and known failure modes disclosed.
Binding legal requirement · Deployer-readable controls
Expected evidence
Instructions for use, limitations statement, intended purpose, accuracy disclosures, failure-mode communication.
prEN 18229-1 Stage 20
AI Trustworthiness Framework
Part 1: Logging, Transparency and Human Oversight
Emerging standards pathway · AIGN: enable deployer judgment
AIGN implementation relevance
Treat transparency as an operational control for the deployer, not just as a documentation output.
Art. 13
Annex Z
EN ISO/IEC 12792:2025
Transparency taxonomy of AI systems
Parallel development
Art. 14
Human Oversight
Human-machine interface enabling effective oversight. Measures to prevent automation bias. Ability to override, intervene, stop, and correct AI outputs. Competence requirements for oversight persons.
Binding legal requirement · Human accountability layer
Expected evidence
Oversight procedure, stop/override rights, escalation rules, operator training records, anti-automation-bias measures.
prEN 18229-1 Stage 20
AI Trustworthiness Framework
Part 1: Logging, Transparency and Human Oversight — human-machine interface, oversight mechanisms, automation bias countermeasures
Emerging standards pathway · AIGN: preserve accountable HITL
AIGN implementation relevance
Human oversight fails when authority, competence and escalation rights are not explicitly designed into operating processes.
Art. 14
Annex Z
EN ISO/IEC 12792:2025
Transparency taxonomy of AI systems
Parallel development
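Override and escalation rights only work if they are encoded in the operating process rather than left to the interface. One hedged sketch of such a routing rule — the operator attributes and the flagging condition are invented for illustration:

```python
def route_output(output, operator):
    """Decide whether an AI output is released, held for human
    review, or blocked, based on explicit operator authority."""
    if not output.get("flagged"):
        return "release"
    # Flagged outputs never auto-release: only a trained operator
    # with override rights may take the decision; anyone else must
    # escalate. This hard-codes the accountability layer.
    if operator.get("trained") and operator.get("can_override"):
        return "hold_for_human_review"
    return "block_and_escalate"
```

Making the "never auto-release when flagged" branch unconditional is one concrete countermeasure to automation bias: the system cannot quietly proceed past a flag.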
Art. 15
Accuracy, Robustness and Cybersecurity
Appropriate accuracy levels declared in instructions for use. Resilience to errors, faults and misuse. Resistance to adversarial attacks. Feedback-loop mitigation. Cybersecurity proportionate to risk.
Binding legal requirement · Coordinate with cyber control owners
Expected evidence
Declared performance metrics, robustness testing, misuse scenarios, adversarial testing, cybersecurity control mapping.
prEN 18229-2 Stage 10
AI Trustworthiness Framework
Part 2: Accuracy and Robustness — accuracy metrics, robustness assessment, performance consistency
Emerging standards pathway · AIGN: link metrics to liability thresholds
AIGN implementation relevance
Define performance thresholds that matter for legal, safety and operational decisions — not only model lab metrics.
Art. 15
Annex Z
prEN 18282 Stage 40
Cybersecurity Specifications for AI Systems
AI-specific vulnerability management, adversarial robustness, attack surface analysis
Emerging standards pathway · AIGN: join AI and cyber governance
AIGN implementation relevance
Cybersecurity for AI should be connected to enterprise cyber ownership, vulnerability management and incident governance.
Art. 15
Annex Z
EN ISO/IEC 24029-2:2023
Assessment of robustness of neural networks — Part 2: Formal methods
Parallel development
ISO/IEC DIS 24029-3:—
Assessment of robustness of neural networks — Part 3: Statistical methods
ISO/IEC only
prEN 18281:—
Evaluation methods for accurate computer vision systems
CEN-CENELEC only
prEN ISO/IEC CD 23282:—
Evaluation methods for accurate NLP systems
Parallel development
ISO/IEC CD 4213:— (2nd ed.)
Performance measurement for AI classification, regression, clustering and recommendation
ISO/IEC only
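Linking declared metrics to thresholds that matter for legal, safety and operational decisions, as suggested above, reduces in practice to a simple comparison between measured values and the minimums declared in the instructions for use. The metric names below are placeholders:

```python
def threshold_breaches(measured, declared_minimums):
    """Return metrics falling below their declared minimums,
    mapped to (measured, minimum) pairs for escalation."""
    return {name: (value, declared_minimums[name])
            for name, value in measured.items()
            if name in declared_minimums and value < declared_minimums[name]}
```

The value of the exercise is not the code but forcing someone to write down `declared_minimums`: the thresholds at which a result stops being a lab metric and becomes a compliance event.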
Art. 17
Quality Management System
Documented QMS covering regulatory compliance strategy, design control, development quality assurance, testing procedures, data management, risk management integration, post-market monitoring, incident reporting, and record-keeping.
Binding legal requirement · Governance operating model
Expected evidence
QMS manual, governance roles, approval controls, release procedures, testing workflow, incident and PMS integration.
prEN 18286 Stage 40
Quality Management System for EU AI Act Regulatory Purposes
QMS documentation, compliance procedures, design verification, post-market integration
Emerging standards pathway · AIGN: QMS is governance infrastructure
AIGN implementation relevance
The QMS is where legal compliance, technical controls, accountability and monitoring become one operating system.
Art. 17
Art. 72
Annex Z
prEN 18228
AI Risk Management (Art. 9 integration required by Art. 17(1)(g))
Normative cross-ref
Art. 72
Post-Market Monitoring
Active systematic data collection on system performance throughout lifetime. Post-market monitoring plan as part of technical documentation. Continuous compliance evaluation. Integration with Art. 73 incident reporting.
Binding legal requirement · Continuous assurance
Expected evidence
Monitoring plan, performance thresholds, event triggers, issue triage, corrective actions, incident reporting workflow.
prEN 18286 Stage 40
Quality Management System for EU AI Act Regulatory Purposes
Post-market monitoring plan template, performance data collection and analysis requirements
Emerging standards pathway · AIGN: monitoring must reach leadership
AIGN implementation relevance
Post-market monitoring should not remain a technical silo; it must feed governance review, escalation and corrective decision-making.
Art. 17
Art. 72
Annex Z
prEN ISO/IEC DIS 24970
AI system logging — supports post-market data collection
Parallel development
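A post-market monitoring plan's event triggers (per the evidence list above) can be expressed as executable checks over a window of observations. The trigger conditions, field names and action labels here are illustrative assumptions, not requirements of prEN 18286:

```python
def evaluate_window(observations, plan):
    """Check one window of post-market observations against the
    monitoring plan; return fired triggers with follow-up actions."""
    fired = []
    error_rate = sum(1 for o in observations if o.get("error")) / len(observations)
    if error_rate > plan["max_error_rate"]:
        fired.append(("error_rate", "open corrective action"))
    # Serious incidents bypass thresholds entirely and route
    # straight to the Art. 73 reporting workflow.
    if any(o.get("serious_incident") for o in observations):
        fired.append(("serious_incident", "report under Art. 73"))
    return fired
```

Wiring the returned triggers into governance review rather than a dashboard is what keeps monitoring out of the technical silo the note above warns against.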
Art. 5(1)(a)
Subliminal / Manipulative Techniques
AI systems that deploy techniques beyond a person's consciousness or use purposefully deceptive manipulation to materially distort behaviour causing significant harm.
Art. 5(1)(b)
Exploitation of Vulnerabilities
Systems exploiting age, disability, or social/economic vulnerability to distort behaviour in a manner that causes or is likely to cause significant harm.
Art. 5(1)(c)
Social Scoring
Evaluation or classification of natural persons over time based on social behaviour or personality characteristics leading to detrimental treatment or unjustified harm.
Art. 5(1)(d)
Predictive Policing (Individual)
Risk assessments of natural persons to predict future criminal activity solely based on profiling, except where supported by objective and verifiable facts.
Art. 5(1)(e)
Untargeted Facial Scraping
Creating or expanding facial recognition databases through untargeted scraping of internet or CCTV footage.
Art. 5(1)(f)–(h)
Biometric Categorisation / Real-Time Remote Biometric ID
Biometric categorisation inferring sensitive attributes. Real-time remote biometric identification in public spaces for law enforcement, with narrow exceptions only.
AIGN note: the highest practical failure rate is usually not in understanding the sequence above, but in assigning owners, generating evidence early enough, and keeping provider and deployer responsibilities distinct in real operations.
Standard ID
Title and Scope
Status & Type
prEN 18228
AI Risk Management
Risk management processes for AI systems — Art. 9 coverage
Art. 9 · Annex Z · Primary
Stage 20
CEN-CENELEC
prEN 18284
Quality and Governance of Datasets in AI
Dataset quality criteria, governance practices, bias examination — Art. 10 coverage
Art. 10 · Annex Z · Primary
Stage 20
CEN-CENELEC
prEN 18229-1
AI Trustworthiness Framework — Part 1
Logging, transparency and human oversight — Arts. 11–14 coverage
Art. 11–14 · Annex Z · Primary
Stage 20
CEN-CENELEC
prEN 18229-2
AI Trustworthiness Framework — Part 2
Accuracy and robustness — Art. 15 coverage. Launched for working draft consultation Dec 2025.
Art. 15 · Annex Z · Primary
Stage 10
CEN-CENELEC
prEN 18282
Cybersecurity Specifications for AI Systems
AI-specific cybersecurity requirements — Art. 15(5) coverage
Art. 15 · Annex Z · Primary
Stage 40
CEN-CENELEC
prEN 18286
Quality Management System for EU AI Act Regulatory Purposes
QMS and post-market monitoring — Arts. 17, 72 coverage. Source of Annex B mapping.
Art. 17 · Art. 72 · Annex Z · Primary
Stage 40
CEN-CENELEC
EN ISO/IEC TS 12791:2024
Treatment of Unwanted Bias in Classification and Regression ML Tasks
Normative reference for bias management — supports Arts. 9, 10
Art. 9 · Art. 10
Published
Parallel development
EN ISO/IEC 12792:2025
Transparency Taxonomy of AI Systems
Normative reference for transparency requirements — supports Arts. 13, 14
Art. 13 · Art. 14
Published
Parallel development
EN ISO/IEC 24029-2:2023
Assessment of Robustness of Neural Networks — Part 2: Formal Methods
Normative reference for robustness assessment — supports Art. 15
Art. 15
Published
Parallel development
prEN 18283:—
Concepts, Measures and Requirements for Managing Bias in AI Systems
Updated location confirmed Feb 2026 per JTC 21 newsletter — supports Arts. 9, 10
Art. 9 · Art. 10
Stage 20
Parallel development
prEN ISO/IEC DIS 24970
Artificial Intelligence — AI System Logging
Supports Art. 12 logging requirements and Art. 72 post-market data collection
Art. 12 · Art. 72
Stage 40
Parallel development
ISO/IEC DIS 24029-3:—
Assessment of Robustness of Neural Networks — Part 3: Statistical Methods
Robustness assessment via statistical methods — supports Art. 15
Art. 15
Stage 40
ISO/IEC only
ISO/IEC CD 4213:— (2nd ed.)
Performance Measurement for AI Classification, Regression, Clustering and Recommendation
Accuracy metrics and performance benchmarks — supports Art. 15
Art. 15
Stage 10
ISO/IEC only
prEN ISO/IEC CD 23282:—
Evaluation Methods for Accurate Natural Language Processing Systems
NLP accuracy evaluation — supports Art. 15 for language AI systems
Art. 15
Stage 10
Parallel development
prEN 18281:—
Evaluation Methods for Accurate Computer Vision Systems
Scope updated March 2026 — progressed to Stage 40. Computer vision accuracy evaluation — supports Art. 15
Art. 15
Stage 40
CEN-CENELEC only