AI tools under the antitrust spotlight, as Commission opens abuse of dominance cases into both Meta and Google
19 December 2025

The European Commission has opened twin investigations into Meta and Google’s conduct in relation to generative AI assistants and related services. Whilst both cases are at a very early stage, they indicate a broadening of the scope of the potential competition concerns the Commission is likely to examine in AI markets – which the Digital Markets Act’s (DMA) static rulebook appears ill-equipped to address.
What the Commission is investigating
The investigations into each of Meta and Google are entirely separate and self-standing. Nevertheless, the proximity of their announcements was striking, with the Commission’s announcement of the Meta investigation coming on 4 December 2025, and the announcement in relation to Google following only five days later, on 9 December. This proximity appears more than coincidental.
Meta: access for AI providers to WhatsApp
The Commission’s investigation into Meta’s conduct concerns a new policy, announced in October 2025, prohibiting businesses whose primary service is a general-purpose AI assistant from using the WhatsApp Business Solution (WBS – a tool that allows businesses to integrate with WhatsApp in order to communicate with their customers). Where AI is used for ancillary functions (e.g. customer support), WBS can still be used.
The Commission is concerned this policy will prevent third-party AI developers from reaching EU users via WhatsApp while Meta’s own assistant (Meta AI) remains directly available inside the platform, which could see Meta leverage its dominance in social media and instant messaging in order to foreclose innovative AI competitors. Interestingly, the Commission has also indicated that it is considering imposing interim measures under Article 8 of Regulation 1/2003, something it has previously done in only one case.
Google: AI Overviews, AI Mode and the use of third-party content
As regards Google, the Commission’s concerns are two-pronged.
First, it is concerned that Google’s use of web publisher content to provide AI-powered services on its search results pages (AI Overviews and AI Mode), without compensation and without a possibility to refuse such use, amounts to an imposition of unfair terms and conditions on publishers.
Second, the Commission believes Google may have used content uploaded to YouTube to train its generative AI models – again, without compensation to the relevant content producers or a possibility to refuse such usage – which may amount to an imposition of unfair trading conditions. At the same time, rival developers are barred from using YouTube content to train their AI models, potentially placing them at a competitive disadvantage to Google and distorting competition on downstream markets.
The broader context
Over the past few years, competition authorities have been proactive in exploring and highlighting the risks to competition that might stem from the rapid rise of generative AI technologies. For example:
- The Commission launched a consultation on competition in virtual worlds and generative AI in January 2024. This resulted in a September 2024 Policy Brief, which outlined factors impacting competition in generative AI markets, possible approaches to assessing competition in such markets, and theories of harm that might materialise.
- In July 2024, the Commission, the CMA, and the US Department of Justice and Federal Trade Commission issued a joint statement on competition in generative AI foundation models and AI products. This highlighted many of the same risks, and articulated three principles for protecting competition in the AI ecosystem: fair dealing, interoperability and choice.
- Numerous investments by Big Tech firms into generative AI developers, and strategic partnerships between them, have been investigated through a merger control lens. The CMA has been most active in this regard, given the greater flexibility it enjoys to investigate acquisitions falling short of control, but the Commission also formally investigated (and approved) Nvidia’s acquisition of Run:AI, despite it falling under the EU’s merger control thresholds, following a referral up from the Italian competition authority.
That antitrust cases have now been opened in this increasingly important sector is, therefore, not altogether surprising. However, it should be noted that they follow a period during which such issues have featured rather less prominently in policy discussions than they did in 2024.
Key takeaways
With both cases at very early stages, and few details contained in the Commission’s press releases, their legal implications remain to be seen. The Commission has also recently demonstrated a willingness to quickly settle abuse of dominance cases through commitments (in which no breach of Article 102 TFEU is admitted), so it may be that few concrete principles will emanate from these cases. Nevertheless, some early observations can already be made.
A shift towards exploitative theories of harm
The Commission’s Policy Brief outlined five possible types of competition risk in AI-related markets. These covered a range of issues, both vertical and horizontal, but particular prominence was given to risks emanating from the control of upstream inputs (such as computing capacity, data and human talent) – with strategic investments and partnerships between Big Tech firms and small AI companies amplifying such risks. The possibility of large players with AI foundation model offerings using their market power to foreclose downstream competitors was also explored.
These risks are reflected in both the Meta and Google cases, insofar as they relate to concerns that the former is self-preferencing its own AI chatbot on WhatsApp, and the latter is denying its AI rivals access to video and audio data for training purposes, placing them at a disadvantage. These concerns also reflect well-established theories of harm, including those stemming from previous antitrust cases against Google.
However, the focus on unfair trading conditions in the Google case is more novel, and was not signposted in the Policy Brief. In fact, beyond mentioning the words “unfair trading conditions”, the Policy Brief does not explore the risk of exploitative abuse at all. The Commission’s pivot in this regard can perhaps partly be explained by significant concerns raised by publishers, in various fora, regarding the uncompensated use of their content and the attendant risk of traffic being diverted away from them.
Enforcement of Article 102 TFEU, rather than the DMA
Considering that both Meta and Google are “gatekeepers” under the DMA, and both WhatsApp and Google Search are designated “core platform services” subject to the DMA’s obligations, it is notable that these investigations are being run as antitrust, not DMA, cases.
Although DMA enforcement has produced fewer final infringement decisions than some might have expected by now, it carries significant procedural advantages. These include a streamlined administrative procedure that can deliver more rapid results, and the avoidance of complex economic arguments on points such as market definition and exclusionary effects. The potential penalties for DMA infringement are also the same as for antitrust violations.
The decision to open Article 102 investigations into Meta and Google is therefore likely explained by the fact that not all the conduct of interest could be made to fit squarely within the prohibited forms of conduct in DMA Articles 5 or 6. For example, in relation to Google’s conduct, Article 6(12) DMA specifies that gatekeepers must apply fair, reasonable and non-discriminatory conditions of access to business users, but this applies only to app stores, social networking services and online search engines – not to video sharing platforms such as YouTube.
These cases therefore serve to highlight the inflexibility inherent in the DMA’s detailed and pre-determined lists of prohibited conduct – in contrast to the more bespoke UK digital markets competition regime (DMCR).
The approach in the UK
As explored in our previous article, the first firms with “strategic market status” (SMS) were designated by the CMA in October 2025. Google in particular was designated both in respect of its mobile platform, comprising Android and related products, and its general search platform, covering both its user-facing and advertiser-facing functions. Designation under the DMCR is, however, only the beginning of the story: the CMA must now propose and consult upon binding conduct requirements (CRs), to which the SMS firms will be subject.
The Google general search designation expressly covers both Google AI Overviews and AI Mode. And the CMA indicated in June 2025 that in its CR-setting process it intends to prioritise measures to ensure publishers have transparency and control in respect of how their content, when collected by Google search, is used and attributed in AI-generated responses.
The CMA therefore appears to be alive to some of the concerns underpinning the Commission’s investigations, and likely to take at least some steps to address them. That said, it should be noted that YouTube sits outside the scope of the SMS designations and Meta has, to date, escaped investigation under the DMCR entirely. The CMA’s appetite to take on further Big Tech antitrust cases also remains questionable, given its Digital Markets Unit (which leads its online-related enforcement activities) has already had to slow down its deployment of the DMCR (the CMA shelved its plan to launch a further SMS investigation in 2025).
A willingness to confront Big Tech amid geopolitical noise
There has been speculation that EU competition enforcement might be tempered by transatlantic tensions, as the current US administration continues to accuse the EU institutions of unfairly targeting US tech companies. However, September’s near-€3bn fine in the Google AdTech case, and now the opening of these new cases against firms that have only very recently been on the receiving end of large fines, suggest that the Commission remains determined to press ahead where it sees risks of consumer harm, loss of choice, and impediments to innovation from exclusionary or exploitative practices.