Legal development

Transparency of AI-generated content: the EU's first draft Code of Practice

    As the pace of generative AI accelerates, so too does the risk of harm from issues such as misinformation and deepfakes. The EU's draft Code of Practice on Transparency of AI-Generated Content (draft Code) aims to address this with specific operational measures which signpost the principal actors in the AI value chain towards compliance with the transparency requirements of the EU AI Act (Act). Key measures include:

    • Requirements for marking, detection and disclosure
    • Specific rules on deepfakes and matters of public interest
    • Internal governance, monitoring and training.

    We explore each of these in more detail below.

    Overview

    The draft Code is a major step in converting the high-level transparency requirements set out in Article 50 of the Act into firm operational expectations for providers and deployers of generative AI systems. Prepared following significant stakeholder involvement, it represents the first structured roadmap on how organisations are expected to approach the marking and detection of AI-generated or manipulated content, as well as the labelling of deepfakes, going forward. The transparency obligations in Article 50 of the Act come into force on 2 August 2026. As the deadline approaches, following the steps set out in the draft Code is likely to become a significant benchmark to measure compliance in practice.

    Who's covered?

    The draft Code applies to 'providers' (those placing generative AI systems on the market) and 'deployers' (entities using generative AI systems to generate or publish content). The two main sections of the draft Code allocate distinct, but related, responsibilities to each.

    Providers: 'marking' and 'detection' of AI-generated and manipulated content

    Section 1 of the draft Code is directed at providers of generative AI systems, including General Purpose AI (GPAI) models. Providers must ensure that audio, image, video, or text outputs are machine-readable and detectable as artificially generated. Specifically, the draft Code sets out four key technical requirements to show compliance with Articles 50(2) and 50(5) of the Act:

    1. Multi-Layered Marking

    The draft Code explicitly prohibits providers from relying on a single marking technique. Instead, they must implement a "multi-layered" approach involving:

    a. Metadata Embedding: Inserting machine-readable provenance information and digital signatures directly into the file of a piece of content (such as an image, audio, or video file) to identify it as AI-generated;

    b. Imperceptible Watermarking: Providers may embed watermarks during model training or inference, or within the output of an AI model or system; and

    c. Fingerprinting/Logging: Where metadata or watermarking techniques might fail, providers will need to establish and maintain fingerprints of the AI-generated or manipulated content, or logging facilities, so that outputs can still be checked (see the illustrative sketch below).
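
    By way of illustration only, the sketch below shows how a provider might combine two of these layers for a single output file: writing a machine-readable provenance record alongside the content (a simplified stand-in for embedded metadata) and recording a cryptographic fingerprint in a log as a fallback. The file names, fields and helper functions are hypothetical and are not drawn from the draft Code or from any marking standard; imperceptible watermarking is omitted as it is model-specific.

        # Illustrative sketch only: hypothetical provenance record and fingerprint log,
        # not an implementation of the draft Code or of any marking standard.
        import hashlib
        import json
        from datetime import datetime, timezone
        from pathlib import Path

        def write_provenance_record(content_path: Path, model_id: str) -> Path:
            """Layer (a), simplified: store machine-readable provenance alongside the
            output file (real systems would embed this in the file's own metadata)."""
            record = {
                "ai_generated": True,
                "generator": model_id,  # hypothetical model identifier
                "created_utc": datetime.now(timezone.utc).isoformat(),
            }
            sidecar = Path(str(content_path) + ".provenance.json")
            sidecar.write_text(json.dumps(record, indent=2))
            return sidecar

        def log_fingerprint(content_path: Path, log_path: Path) -> str:
            """Layer (c): keep a SHA-256 fingerprint of the output so it can still be
            matched if metadata is stripped or a watermark does not survive."""
            digest = hashlib.sha256(content_path.read_bytes()).hexdigest()
            with log_path.open("a") as log:
                log.write(digest + "\n")
            return digest

        if __name__ == "__main__":
            output = Path("generated_image.png")            # hypothetical AI-generated output
            output.write_bytes(b"...binary image data...")  # placeholder content
            write_provenance_record(output, model_id="example-image-model-v1")
            log_fingerprint(output, Path("fingerprint_log.txt"))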

    2. Detection

    Marking alone is not sufficient. Providers should also enable detection of AI-generated or manipulated content. The draft Code expects providers to offer a free-of-charge interface (e.g. API or user interface) or a publicly available tool so that users and third parties can verify whether content is AI-generated.
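
    As a rough illustration of what such a verification interface might sit on top of, the sketch below checks a submitted file against the fingerprint log from the previous example. The function names and file paths are hypothetical; a real service would also inspect embedded metadata and watermarks, and would typically be exposed as a free API or public tool rather than a local script.

        # Illustrative sketch only: a hypothetical fingerprint lookup of the kind a
        # provider's free verification API or public tool might rely on.
        import hashlib
        from pathlib import Path

        def load_known_fingerprints(log_path: Path) -> set:
            """Read the provider's fingerprint log (one SHA-256 digest per line)."""
            if not log_path.exists():
                return set()
            return {line.strip() for line in log_path.read_text().splitlines() if line.strip()}

        def appears_ai_generated(content_path: Path, log_path: Path) -> bool:
            """Return True if the submitted file matches a logged fingerprint."""
            digest = hashlib.sha256(content_path.read_bytes()).hexdigest()
            return digest in load_known_fingerprints(log_path)

        if __name__ == "__main__":
            # Self-contained demo: log a sample file's fingerprint, then verify it.
            sample = Path("sample_output.bin")
            sample.write_bytes(b"example AI-generated content")
            log = Path("fingerprint_log.txt")
            with log.open("a") as f:
                f.write(hashlib.sha256(sample.read_bytes()).hexdigest() + "\n")
            print("AI-generated (per fingerprint log):", appears_ai_generated(sample, log))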

    3. Technical quality requirements

    The technical solutions used for marking and detection must be effective, reliable, tamper-resistant and robust, as well as interoperable across platforms and tools. Providers are expected to take into account content-specific limitations, implementation costs and what is generally acknowledged as state of the art.

    4. Testing, verification and compliance

    Providers should continuously test and update solutions against real-world conditions and evolving threats, document and remediate failures, train relevant staff, and cooperate with authorities by providing information and system access to demonstrate compliance.

    Deployers: rules for labelling deepfakes and AI-generated and manipulated text

    Section 2 of the draft Code targets deployers of AI systems that generate deepfakes or text intended to inform the public on matters of public interest.

    A common taxonomy

    The draft Code aims to establish a harmonised, shared vocabulary and classification system so that AI-generated or manipulated content is consistently and transparently identified across different platforms and services. To this end, content is divided into two categories:

    • Fully AI-generated content: content autonomously generated without any human-authored authentic input (e.g., fully AI-generated images, video or audio based on prompts to the system; AI-generated books, articles or other content on matters of public interest); and
    • AI-assisted content: content with a combination of human and AI involvement, such as object removal from images, face/voice replacement or modification, or AI-generated text that imitates the style of a specific person.

    The draft Code also proposes a standardised EU-wide icon (currently an "AI" visual) to indicate synthetic content.

    Deepfakes: stricter disclosure rules

    For content qualifying as a deepfake, the draft Code takes a much stricter approach. Deployers must ensure clear and prominent disclosure from the start, using modality-specific methods such as:

    • Persistent visual indicators and opening disclaimers for live video;
    • Visible labels or disclaimers for recorded video and images; and
    • Audible disclaimers for audio content.

    Disclosure obligations apply irrespective of the purpose of the deepfake, subject only to limited exemptions in Article 50(4) of the Act for law enforcement purposes or obviously artistic or fictional works. In practice, this means deployers will need robust internal processes to identify when content qualifies as a deepfake and to label it consistently and at scale.

    AI-generated text on matters of public interest

    For AI-generated or manipulated text on matters of public interest, specific disclosure measures are mandatory unless the content has undergone human review and is subject to editorial control.

    Compliance, training and cooperation

    The draft Code's labelling and disclosure rules are supplemented by an emphasis on internal governance. Deployers are expected to put in place a broad range of internal compliance processes, training and monitoring arrangements. Examples include:

    • Training staff and content moderators on identifying AI-generated and manipulated content;
    • Human oversight of labelling decisions for deepfakes and AI-generated/manipulated text (automation alone is not sufficient);
    • Mechanisms for users or third parties to flag mis-labelled or unlabelled content; and
    • Cooperation with market surveillance authorities and other third parties, including media regulators and providers of VLOPs/VLOSEs (very large online platforms and very large online search engines).

    Practical lessons for providers and deployers

    The draft Code clearly shows the regulatory direction of travel on AI transparency, and what 'good compliance' under Article 50 of the AI Act is likely to entail in practice. It is expected to become a central reference point for regulators and market participants alike.

    • Transparency will be both technical and organisational: Providers and deployers will be expected to combine technical solutions with robust governance, training and documentation to demonstrate that deepfakes, as well as AI-generated and manipulated content, are clearly identifiable. Providers should begin assessing whether their current systems can support multi-layered marking and reliable detection at scale, while deployers should review content workflows, labelling practices and user-facing disclosures.
    • Providers should prepare for multi-layered technical obligations. Existing watermarking, provenance and detection capabilities should be integrated into providers' generative AI models and assessed against the draft Code’s expectation of interoperable, resilient and scalable solutions. For many providers, this will require further investment and early engagement with emerging technical standards.
    • Deployers should map disclosure obligations across content workflows. Organisations using generative AI for public-facing content should identify where labelling and disclosure duties apply, where exemptions may be available, and how these can be operationalised. Human review is likely to remain a critical compliance tool, particularly where deployers rely on editorial exemptions or proportionate disclosure for creative works.
    • Alignment with the Digital Services Act (DSA) will be essential. Platforms that act both as deployers and online intermediaries will need to consider Article 50 obligations alongside DSA requirements on risk assessments, content moderation, user transparency and deepfake labelling, to ensure a coherent and defensible compliance approach.
    • The Code will shape future standards and enforcement. Although voluntary, the Code is expected to influence technical standard-setting, inform what regulators view as "appropriate measures", and serve as a benchmark in future enforcement – especially where organisations rely on bespoke or non-standard transparency mechanisms.

    Although the final text of the Code may change, the underlying message is consistent: AI-generated and manipulated content will need to be clearly identifiable, and organisations will be expected to demonstrate proactive, well-documented efforts to meet that standard. For businesses operating across both the UK and EU, early engagement with these requirements will be critical to managing regulatory risk and maintaining trust in AI-enabled products and services.

    What happens next?

    Not all stakeholders support the current draft Code. The creative sector harbours concerns that fundamental risks are not fully addressed, whereas small developers are nervous about compliance costs. As the draft Code is subject to further consultation and refinement, changes are possible, so businesses within its scope – and their legal teams – should track developments closely.

    A second draft is expected in March 2026. The final version, slated for June 2026, is likely to be endorsed by the Commission, inform regulatory interpretation of Article 50, and shape emerging best practice on AI transparency.

    More broadly, the draft Code signals a clear shift toward a more transparent and verifiable AI information ecosystem. Early engagement will put organisations in a stronger position to manage regulatory risk and maintain user trust ahead of the August 2026 deadline.

    Other Authors: Nilesh Ray, Associate

    The information provided is not intended to be a comprehensive review of all developments in the law and practice, or to cover all aspects of those referred to.
    Readers should take legal advice before applying it to specific issues or transactions.