TLDR: The European Union has appointed independent experts to spearhead two working groups tasked with drafting a voluntary code of practice for labeling AI-generated content and deepfakes. This initiative, driven by the EU’s AI Act, aims to ensure transparency and combat misinformation, with legal obligations for marking AI content set to take effect on August 2, 2026.
The European Union has officially begun drafting a code of practice on the transparent, clear labeling of AI-generated content. On November 5, 2025, independent experts were appointed to lead two distinct working groups, marking a significant step in the implementation of the EU’s landmark AI Act.
One working group will focus on developing technical solutions for labeling AI-generated content. This group will be chaired by experts with a strong background in technology. The second group will address the complex issue of deepfakes, with its leadership comprising specialists in media studies and law. These appointments were announced following a kick-off plenary session that gathered over 200 participants, including representatives from AI companies, rights holders, experts, and civil society organizations, highlighting the collaborative and multi-stakeholder approach to this initiative.
The impetus for this code of practice stems directly from the EU’s AI Act, which mandates that developers of generative AI systems, such as OpenAI’s ChatGPT and Google’s Gemini, must mark AI-generated or manipulated content in a machine-readable and detectable format. Furthermore, users of these systems who create deepfakes—content impersonating a person’s face, voice, or other personal characteristics—or synthetically generated text intended to inform the public on matters of public interest, are required to disclose the artificial origin of such content, with specific exceptions.
These legal obligations under the AI Act are scheduled to become applicable on August 2, 2026. The European Commission is facilitating the drafting of this voluntary code of practice, which is designed to support industry compliance with these upcoming regulations. The overarching goal is to mitigate risks associated with misinformation, fraud, impersonation, and consumer deception, thereby fostering greater trust within the digital information ecosystem. The European AI Office is overseeing this inclusive, seven-month, stakeholder-driven process to finalize the code, which will support the marking of various synthetic media, including audio, images, video, and text, in formats that enable easy detection.
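To make the idea of a "machine-readable and detectable" marking concrete, here is a minimal, purely illustrative sketch in Python of what such a label could look like: a JSON provenance record bound to the content by a cryptographic hash. All field names and the overall format are assumptions for illustration only; the actual marking scheme will be defined by the forthcoming code of practice and existing standards work (e.g., content-provenance metadata and watermarking).

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical label format; field names are illustrative assumptions,
# not the format the EU code of practice will prescribe.
def make_ai_content_label(content: bytes, generator: str) -> dict:
    """Build a machine-readable label marking content as AI-generated."""
    return {
        "ai_generated": True,
        "generator": generator,
        "created_utc": datetime.now(timezone.utc).isoformat(),
        # The hash binds the label to the exact bytes it describes,
        # so tampering with the content invalidates the label.
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }

def is_labeled_ai_content(label: dict, content: bytes) -> bool:
    """Detect whether a label marks this content as AI-generated."""
    return (
        label.get("ai_generated") is True
        and label.get("content_sha256") == hashlib.sha256(content).hexdigest()
    )

article = b"Synthetic text intended to inform the public on a matter of public interest."
label = make_ai_content_label(article, generator="example-generative-model")
print(json.dumps(label, indent=2))
```

The key property illustrated here is detectability: any downstream tool can parse the record and verify, without human judgment, that the content was machine-marked as synthetic. Real-world schemes layer this with signatures and in-band watermarks so the mark survives copying and re-encoding.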