TLDR: The EU’s Digital Markets Act (DMA) aims to foster fair digital markets by regulating ‘gatekeepers’ and their Core Platform Services (CPS). While Artificial Intelligence (AI) is not yet a standalone CPS, it falls under the DMA when integrated into existing gatekeeper services. The European Commission is exploring the formal inclusion of AI as a CPS, with a potential decision during the DMA’s first review in 2026. The push reflects growing concern over how gatekeepers leverage data to develop AI models, as well as the broader ethical implications of AI; non-compliance with the DMA already carries significant penalties.
The European Union’s Digital Markets Act (DMA), enacted to ensure fair and contestable markets in the digital sector, imposes stringent obligations on designated ‘gatekeepers’ – large platforms offering Core Platform Services (CPS) such as online search engines, social networks, and operating systems. As of August 18, 2025, Artificial Intelligence (AI) is not formally classified as a standalone CPS under the DMA. The European Commission maintains, however, that AI is already covered by the DMA where it is integrated into a gatekeeper’s existing CPS, for example a search engine or virtual assistant that incorporates AI features.
The Commission is currently considering a proposal to explicitly add AI to the list of CPS, a decision that could follow a comprehensive market investigation. This potential inclusion is expected to be a key agenda item during the DMA’s first review, slated for 2026. The hesitation to formally designate AI as a CPS stems from the difficulty of defining what constitutes a ‘gatekeeper’ in the rapidly evolving AI landscape, especially since AI is so often embedded within broader service offerings.
A central concern addressed by the DMA is the potential for gatekeepers to leverage vast amounts of data from their CPS to create barriers to entry for competitors. The same principle extends to the use of such data for training and developing AI models, the foundational engines behind AI systems such as chatbots. If a CPS already incorporates an AI system, such as a generative AI function within a search engine, the Commission may argue that the gatekeeper’s use of data from that system to strengthen its market position disadvantages smaller AI innovators.
Furthermore, the DMA mandates that gatekeepers provide business users with free, high-quality, continuous, and real-time access to the data generated or provided through their use of a CPS, including data from their customers. A related obligation bars gatekeepers from using non-public data generated by business users to compete against them, for instance by training their own AI sales functions on a marketplace’s sales data.
Non-compliance with the DMA carries substantial penalties. The European Commission has the authority to issue non-compliance decisions, order gatekeepers to cease infringing practices, and impose fines of up to 10% of a company’s total worldwide annual turnover, rising to 20% for repeated infringements. It can also levy periodic penalty payments of up to 5% of average daily worldwide turnover to compel compliance. Illustrating the severity of these measures, Apple and Meta were fined €500 million and €200 million respectively in April 2025 for breaching DMA obligations, though both decisions are currently under challenge in the EU courts.
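To put these ceilings in perspective, the back-of-the-envelope sketch below applies the statutory maximums to a purely hypothetical turnover figure; the €100 billion number is invented for illustration, and actual fines are set case by case.

```python
# Back-of-the-envelope illustration of the DMA's penalty ceilings.
# The turnover figure is hypothetical; the percentages are the
# statutory maximums, and actual fines are set case by case.

annual_turnover_eur = 100_000_000_000  # hypothetical €100bn worldwide annual turnover

max_fine = 0.10 * annual_turnover_eur          # up to 10% for an infringement
max_fine_repeat = 0.20 * annual_turnover_eur   # up to 20% for repeated infringements

avg_daily_turnover = annual_turnover_eur / 365
max_daily_penalty = 0.05 * avg_daily_turnover  # up to 5% per day to compel compliance

print(f"Maximum fine:          €{max_fine:,.0f}")           # €10,000,000,000
print(f"Maximum fine (repeat): €{max_fine_repeat:,.0f}")    # €20,000,000,000
print(f"Maximum daily penalty: €{max_daily_penalty:,.0f}")  # ~€13,698,630
```

Even the daily penalty alone, at this scale, would approach €14 million per day, which explains why designated gatekeepers treat DMA compliance as a board-level issue.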
Complementing the DMA, the EU AI Act (Regulation (EU) 2024/1689), which entered into force on August 1, 2024, establishes the world’s first comprehensive legal framework for AI. It aims to foster trustworthy AI by setting risk-based rules for AI developers and deployers. The AI Act categorizes AI systems into risk levels: ‘unacceptable risk’ (banned practices like social scoring), ‘high risk’ (posing serious threats to health, safety, or fundamental rights, with strict obligations), ‘limited risk’ (requiring transparency, e.g., informing users they are interacting with an AI), and ‘minimal or no risk’ (no specific rules).
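For readers who prefer a structured view, the short sketch below restates those four tiers as a simple lookup table; the labels and examples paraphrase the Regulation's categories for illustration and carry no legal weight.

```python
# Illustrative restatement of the AI Act's four risk tiers as a lookup
# table; labels and examples paraphrase the Regulation and are not a
# legal taxonomy.

AI_ACT_RISK_TIERS = {
    "unacceptable": {
        "treatment": "prohibited outright",
        "examples": ["social scoring"],
    },
    "high": {
        "treatment": "strict obligations (risk management, documentation, oversight)",
        "examples": ["systems posing serious threats to health, safety, or fundamental rights"],
    },
    "limited": {
        "treatment": "transparency duties",
        "examples": ["chatbots must tell users they are interacting with an AI"],
    },
    "minimal": {
        "treatment": "no specific rules",
        "examples": ["most everyday AI applications"],
    },
}

def treatment(tier: str) -> str:
    """Return the headline regulatory treatment for a risk tier."""
    return AI_ACT_RISK_TIERS[tier]["treatment"]

print(treatment("limited"))  # -> transparency duties
```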
The AI Act applies in phases: prohibitions and AI literacy obligations took effect on February 2, 2025; governance rules and obligations for General-Purpose AI (GPAI) models apply from August 2, 2025; most remaining provisions apply from August 2, 2026; and rules for high-risk AI systems embedded in regulated products have a transition period extending until August 2, 2027. Investors are increasingly advised to prioritize companies with robust ethical AI governance, transparent frameworks, and demonstrable compliance with the EU AI Act to mitigate regulatory, reputational, and financial risks.
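As a quick reference, the sketch below encodes the phased dates above and reports which milestones are already applicable on a given day; it is an illustrative timeline check based on the dates cited in this article, not a compliance tool.

```python
from datetime import date

# Illustrative timeline check for the AI Act's phased application
# dates cited above; a quick-reference sketch, not a compliance tool.

AI_ACT_PHASES = [
    (date(2024, 8, 1), "Regulation enters into force"),
    (date(2025, 2, 2), "Prohibitions and AI literacy obligations apply"),
    (date(2025, 8, 2), "Governance rules and GPAI model obligations apply"),
    (date(2026, 8, 2), "Most remaining provisions apply"),
    (date(2027, 8, 2), "Rules for high-risk AI embedded in regulated products apply"),
]

def phases_in_effect(on: date) -> list[str]:
    """Return the milestones already applicable on the given date."""
    return [label for start, label in AI_ACT_PHASES if start <= on]

# Checking against the article's reference date of August 18, 2025
# returns the first three milestones.
for milestone in phases_in_effect(date(2025, 8, 18)):
    print(milestone)
```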


