Transparency of AI-generated content: the EU's first draft Code of Practice
26 January 2026
As the pace of generative AI accelerates, so too does the risk of harm from issues such as misinformation and deepfakes. The EU's draft Code of Practice on Transparency of AI-Generated Content (draft Code) aims to address this with specific operational measures which signpost the principal actors in the AI value chain towards compliance with the transparency requirements of the EU AI Act (Act). Key measures include:

a. machine-readable marking of AI-generated or manipulated content by providers;
b. free-of-charge tools or interfaces for detecting such content;
c. clear and prominent labelling of deepfakes by deployers;
d. disclosure of AI-generated text on matters of public interest; and
e. internal governance, training and monitoring arrangements.
We explore each of these in more detail below.
The draft Code is a major step in converting the high-level transparency requirements set out in Article 50 of the Act into firm operational expectations for providers and deployers of generative AI systems. Prepared following significant stakeholder involvement, it represents the first structured roadmap on how organisations are expected to approach the marking and detection of AI-generated or manipulated content, as well as the labelling of deepfakes, going forward. The transparency obligations in Article 50 of the Act come into force on 2 August 2026. As the deadline approaches, following the steps set out in the draft Code is likely to become a significant benchmark to measure compliance in practice.
The draft Code applies to 'providers' (those placing generative AI (GenAI) systems on the market) and 'deployers' (entities using GenAI systems to generate or publish content). The two main sections of the draft Code allocate distinct, but related, responsibilities to each.
Section 1 of the draft Code is directed at providers of generative AI systems, including General Purpose AI (GPAI) models. Providers must ensure that audio, image, video or text outputs are marked in a machine-readable format and are detectable as artificially generated or manipulated. Specifically, the draft Code sets out four key technical requirements to demonstrate compliance with Articles 50(2) and 50(5) of the Act:
The draft Code explicitly prohibits providers from relying on a single marking technique. Instead, they must implement a "multi-layered" approach (the first layer of which is illustrated in the sketch after this list) involving:
a. Metadata Embedding: Inserting machine-readable provenance information and digital signatures directly into the file of a piece of content (such as an image, audio, or video file) to identify it as AI-generated;
b. Imperceptible Watermarking: Embedding watermarks during model training, model inference, or within the output of an AI model or system; and
c. Fingerprinting/Logging: Where metadata or watermarking techniques might fail, establishing and maintaining fingerprints of the AI-generated or manipulated content, or logging facilities, that allow outputs to be checked.
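By way of illustration, the following Python sketch shows the simplest form of the first layer: writing provenance information into a PNG file's text chunks with the Pillow library. It is a minimal sketch only; the field names are hypothetical, and a production system would use a recognised, signed provenance standard (such as C2PA) rather than unsigned text chunks.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Embed simple provenance fields into a PNG's text chunks.

    Illustrative only: the field names are hypothetical, and a real
    deployment would use a signed provenance standard (e.g. C2PA) so
    the marking is tamper-evident as well as machine-readable.
    """
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("generator", "example-model-v1")    # hypothetical model name
    meta.add_text("provenance_spec", "example/0.1")   # hypothetical spec label
    img.save(dst_path, pnginfo=meta)

mark_as_ai_generated("output.png", "output_marked.png")
```

Because plain metadata of this kind is easily stripped when content is re-encoded or screenshotted, the draft Code treats it as only one layer alongside watermarking and fingerprinting.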
Marking alone is not sufficient. Providers should also enable detection of AI-generated or manipulated content. The draft Code expects providers to offer a free-of-charge interface (e.g. API or user interface) or a publicly available tool so that users and third parties can verify whether content is AI-generated.
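At its simplest, a detection tool could read those fields back. The sketch below is the verification counterpart to the marking example above and assumes the same hypothetical field names; a real public-facing service would also check for watermarks and signed provenance, typically via an API.

```python
from PIL import Image

def is_marked_ai_generated(path: str) -> bool:
    """Check a PNG's text chunks for the hypothetical provenance field
    written by mark_as_ai_generated() above. Plain metadata is easily
    stripped on re-encoding, which is why the draft Code also requires
    layers such as watermarking and fingerprinting."""
    img = Image.open(path)
    return img.info.get("ai_generated") == "true"

print(is_marked_ai_generated("output_marked.png"))  # True for the marked file
```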
The technical solutions used for marking and detection must be effective, reliable, tamper-resistant and robust, as well as interoperable across platforms and tools. Providers are expected to take into account content-specific limitations, implementation costs and what is generally acknowledged as state of the art.
Providers should continuously test and update solutions against real-world conditions and evolving threats, document and remediate failures, train relevant staff, and cooperate with authorities by providing information and system access to demonstrate compliance.
Section 2 of the draft Code targets deployers of AI systems that generate deepfakes, or text intended to inform the public on matters of public interest.
The draft Code aims to establish a harmonised, shared vocabulary and classification system so that AI-generated or manipulated content is consistently and transparently identified across different platforms and services. To that end, content is divided into two categories: deepfakes, which attract stricter obligations, and other AI-generated or manipulated content.
The draft Code also proposes a standardised EU-wide icon (currently an "AI" visual) to indicate synthetic content.
For content qualifying as a deepfake, the draft Code takes a much stricter approach. Deployers must ensure clear and prominent disclosure from the start, using methods appropriate to the modality of the content.
Disclosure obligations apply irrespective of the purpose of the deepfake, subject only to limited exemptions in Article 50(4) of the Act for law enforcement purposes or evidently artistic or fictional works. In practice, this means deployers will need robust internal processes to identify when content qualifies as a deepfake and to label it consistently and at scale.
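For image content, 'clear and prominent disclosure' might be implemented along the lines of the following sketch, which stamps a visible label onto an image with Pillow. The label text, placement and styling are illustrative assumptions only; the final Code, including the proposed standardised EU icon, will determine what an acceptable label looks like in practice.

```python
from PIL import Image, ImageDraw

def add_visible_ai_label(src_path: str, dst_path: str,
                         label: str = "AI-generated") -> None:
    """Stamp a prominent disclosure label in the bottom-right corner.

    Illustrative only: the wording, placement and styling are
    assumptions, not requirements taken from the draft Code."""
    img = Image.open(src_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    pad = max(img.width // 100, 4)  # scale padding to the image size
    left, top, right, bottom = draw.textbbox((0, 0), label)
    w, h = right - left, bottom - top
    x, y = img.width - w - 3 * pad, img.height - h - 3 * pad
    draw.rectangle((x - pad, y - pad, x + w + pad, y + h + pad), fill="black")
    draw.text((x, y), label, fill="white")
    img.save(dst_path)

add_visible_ai_label("deepfake_frame.png", "deepfake_frame_labelled.png")
```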
For AI-generated or manipulated text on matters of public interest, specific disclosure measures are mandatory unless the content has undergone human review and is subject to editorial control.
The draft Code's labelling and disclosure rules are supplemented by an emphasis on internal governance. Deployers are expected to put in place a broad range of internal compliance processes, together with training and monitoring arrangements.
The draft Code clearly shows the regulatory direction of travel on AI transparency, and what 'good compliance' under Article 50 of the AI Act is likely to entail in practice. It is expected to become a central reference point for regulators and market participants alike.
Although the final text of the Code may change, the underlying message is consistent: AI-generated and manipulated content will need to be clearly identifiable, and organisations will be expected to demonstrate proactive, well-documented efforts to meet that standard. For businesses operating across both the UK and EU, early engagement with these requirements will be critical to managing regulatory risk and maintaining trust in AI-enabled products and services.
Not all stakeholders support the current draft Code. The creative sector harbours concerns that fundamental risks are not fully addressed, whereas small developers are nervous about compliance costs. As the draft Code is subject to further consultation and refinement, changes are possible, so businesses within its scope, and their legal teams, should track developments closely.
A second draft is expected in March 2026. The final version, slated for June 2026, is likely to be endorsed by the Commission, inform regulatory interpretation of Article 50, and shape emerging best practice on AI transparency.
More broadly, the draft Code signals a clear shift toward a more transparent and verifiable AI information ecosystem. Early engagement will put organisations in a stronger position to manage regulatory risk and maintain user trust ahead of the August 2026 deadline.
Other Authors: Nilesh Ray, Associate
The information provided is not intended to be a comprehensive review of all developments in the law and practice, or to cover all aspects of those referred to.
Readers should take legal advice before applying it to specific issues or transactions.