Article 50: Transparency obligations for providers and deployers of certain AI systems – EU AI Act


Transparency Readiness · Article 50

The governance check for organisations using or deploying chatbots, AI-generated content, synthetic media, deepfakes or generative AI workflows (e.g. AI agents, copilot systems).

2 August 2026

The transparency obligations of Art. 50 AI Act apply from 2 August 2026. On 7 May 2026, the European Commission published draft guidelines for practical implementation; these are not yet final and are not legally binding. The Regulation itself, future implementing acts and interpretation by competent authorities and courts remain authoritative. Voluntary codes of conduct under Art. 50(7) may provide practical guidance but do not replace the Regulation or its authoritative interpretation.

Infringements of the transparency obligations under Art. 50 may be sanctioned under Art. 99(4) AI Act with fines of up to EUR 15 million or 3 % of worldwide annual turnover, whichever is higher; for SMEs and start-ups the lower of the two amounts applies. The specific sanction depends on the applicable EU fine category, national procedural and jurisdictional rules and the circumstances of the individual case.
What does Art. 50 EU AI Act regulate?

Four transparency obligations – two addressee groups

Whether an organisation is classified as a provider or as a deployer depends on its specific role in the development, integration, deployment, re-branding, placing on the market, putting into service or modification of the system in question. Depending on the circumstances, obligations may apply cumulatively; in particular, re-branding or substantial modification may in individual cases trigger additional provider obligations, but this is not automatic.

Provider · Art. 50(1)

Chatbots & AI Interaction

AI systems intended for direct interaction with natural persons must be designed and developed so that the persons concerned are informed that they are interacting with an AI system – unless this is evident to a reasonably well-informed, observant and prudent person given the circumstances and context of use.
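As a rough illustration of the design obligation described above, the following sketch shows a disclosure gate for a chat interface: unless the AI nature of the counterpart is already evident from the context of use, an explicit notice is shown before any substantive reply. The function name, flag and message wording are illustrative assumptions, not prescribed by the Regulation.

```python
# Minimal sketch of an Art. 50(1)-style disclosure gate for a chat
# interface. Names and message text are illustrative assumptions.

def start_chat_session(interaction_evident: bool = False) -> list[str]:
    """Return the opening messages for a chat session.

    If interacting with an AI system is not already evident from the
    circumstances and context of use, prepend an explicit disclosure
    before any substantive reply.
    """
    messages: list[str] = []
    if not interaction_evident:
        # Disclosure shown once, before the first AI-generated reply.
        messages.append("Please note: you are interacting with an AI system.")
    messages.append("How can I help you today?")
    return messages
```

Whether the exception ("evident to a reasonably well-informed, observant and prudent person") applies is a legal judgement about the deployment context; it cannot be reduced to a boolean flag and would need to be assessed case by case.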

Provider · Art. 50(2)

Machine-Readable Marking

Providers of AI systems, including general-purpose AI systems, that generate or manipulate synthetic audio, image, video or text content must ensure that outputs are marked in machine-readable form and are detectable as artificially generated or manipulated. The specific technical requirements for implementation still need case-by-case clarification, including through the not-yet-final Commission guidelines, technical standards and future enforcement practice.
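Because the technical standard for machine-readable marking is not yet settled, any implementation is necessarily a design choice. The sketch below shows one simple approach under stated assumptions: wrapping generated text in a JSON envelope carrying a provenance block. The field names and the `example/ai-provenance-v0` schema tag are hypothetical, not a mandated format; established provenance standards (e.g. C2PA content credentials) may ultimately serve this role.

```python
import json
from datetime import datetime, timezone

def mark_output(content: str, generator_id: str) -> str:
    """Wrap generated content in a JSON envelope with a machine-readable
    'artificially generated' marker. Field names are illustrative only."""
    envelope = {
        "content": content,
        "provenance": {
            "ai_generated": True,          # the marker itself
            "generator": generator_id,     # which system produced the output
            "marked_at": datetime.now(timezone.utc).isoformat(),
            "schema": "example/ai-provenance-v0",  # hypothetical schema tag
        },
    }
    return json.dumps(envelope, ensure_ascii=False)

def is_ai_generated(serialized: str) -> bool:
    """Detect the marker from the payload alone, with no out-of-band
    knowledge about the sender."""
    try:
        return bool(json.loads(serialized)["provenance"]["ai_generated"])
    except (ValueError, KeyError, TypeError):
        return False
```

A JSON sidecar is easy to strip, which is why embedded, tamper-resilient approaches (metadata manifests, watermarking) are the focus of ongoing standardisation; the sketch only illustrates the "machine-readable and detectable" idea, not a robust solution.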

Deployer · Art. 50(3)

Emotion Recognition & Biometrics

Deployers of emotion recognition or biometric categorisation systems must inform exposed natural persons about the system’s operation; personal data must be processed in accordance with applicable data protection law. Depending on the context of use, additional obligations or restrictions under the AI Act as well as data protection requirements may apply alongside Art. 50 and must be examined separately.

Deployer · Art. 50(4)

Deepfakes & Public Discourse

Deployers of AI systems used to create or manipulate deepfake content must disclose the artificial origin of such content. Deepfakes are AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and could falsely appear authentic or truthful. Art. 50(4) also covers AI-generated or manipulated text published for the purpose of informing the public on matters of public interest; the notion of "matters of public interest" is context-dependent, subject to interpretation and not yet definitively clarified by enforcement or judicial practice. For such text, an exemption applies where the content has been subject to human review or editorial control and a natural or legal person bears editorial responsibility for the publication. For evidently artistic, creative, satirical, fictional or analogous works a reduced disclosure obligation applies: disclosure must be made in an appropriate manner that does not unduly impair the display or enjoyment of the work.

Statutory exceptions (Art. 50(2) and (4))

  • Purely assistive AI that does not substantially alter the input data or its semantics is exempt (e.g. spell-checking, colour correction).
  • Narrowly defined exceptions apply for legally authorised law enforcement purposes.
  • The draft guidelines address open interpretation and demarcation questions for certain constellations; until they are finalised, the Regulation text and subsequent enforcement and judicial practice remain authoritative.

Note – other regulatory requirements: Transparency labels under Art. 50 do not replace a separate assessment of other regulatory requirements, in particular under platform, media, copyright or data protection law.
Note – legal authority: Binding authority rests with the Regulation text, future implementing acts and the interpretation by competent authorities and courts, with the CJEU as the final arbiter.

Assessment areas

What the readiness check covers

1. Role Clarification

  • Provider or deployer?
  • Own system or API integration?
  • Re-branding, placing on the market, substantial modification – do provider obligations attach? Not automatically.
  • Check for cumulative applicability
  • Demarcation from Art. 53 (GPAI models)

2. Governance Gaps

  • Missing disclosure / labels
  • Missing machine-readable marking
  • Missing documentation & evidence
  • No approval workflow
  • Vendor governance for external AI services

3. Readiness Output

  • Role and scope classification
  • Evidence checklist
  • Action roadmap
  • Management summary
  • Preparation for Code of Practice & Guidelines

Draft Guidelines and Codes of Conduct (Art. 50(7))
In May 2026, the European Commission published draft guidelines on the implementation of the transparency obligations under Art. 50. Practical interpretation is currently shaped by these non-binding draft guidelines; voluntary codes of conduct under Art. 50(7) may additionally provide practical guidance. Both instruments facilitate practical implementation but do not replace a case-by-case legal assessment.

Service Packages

AIGN Article 50 Readiness Packages

Quick Check

€1,900+

Initial assessment of 1–3 AI workflows: Art. 50 relevance, role clarification (provider / deployer), short action list.

Readiness Sprint

€7,500+

Structured inventory, gap analysis, evidence checklist, disclosure logic, preparation informed by a voluntary Code of Practice and draft guidelines – without replacing case-by-case legal review. Implementation roadmap.

Vendor / Trust Label Pre-Check

€15,000+

For AI vendors, SaaS providers and platforms required to demonstrate documented, compliance-oriented transparency readiness to enterprise clients (incl. Art. 50(2) machine-readable marking).

Transparency becomes a board-level issue from August 2026.

AIGN translates Art. 50 into operational governance, evidence and documented, compliance-oriented implementation – for deployers and providers alike. A case-by-case legal review remains necessary.

Contact: upmann@now.digital