Insights

X marks the spot: ICO and Ofcom investigate Grok

Four weeks. That’s all it took for UK regulators to launch co-ordinated investigations into Grok’s "put her in a bikini" debacle - the AI trend that publicly undressed real women and children without their consent. 

On 3 February 2026, the ICO and Ofcom simultaneously announced investigations into X and xAI concerning Grok’s AI chatbot. With similar investigations under way in the EU (under the Digital Services Act) and in the US (by the California Department of Justice) and Westminster now moving to close loopholes in the online harms regime, one thing is clear: the regulatory landscape for generative AI is changing dramatically and at an unprecedented pace. 

The ICO’s investigation

The ICO’s investigation is arguably the more straightforward regulatory path. The legal questions - whether personal data has been processed by X and xAI lawfully, fairly and transparently, and whether appropriate safeguards were built into Grok's design and deployment to prevent the generation of harmful manipulated images derived from personal data - are not new, nor is this the ICO’s first potential enforcement action arising from use of AI. William Malcolm, Executive Director of Regulatory Risk and Innovation at the ICO, described the reports as raising "deeply troubling questions about how people's personal data has been used to generate intimate or sexualised images without their knowledge or consent". The potential penalties if X and xAI are found to have contravened the UK GDPR are substantial and well known (up to £17.5m or 4% of annual worldwide turnover, whichever is higher). For the industry, the message is clear: if you are training AI models on personal data or processing personal data to generate images, you need a lawful basis, transparency and robust safeguards.

Ofcom’s investigation

The Online Safety Act 2023 (OSA) presents a messier picture. Ofcom can investigate X as the platform where harmful imagery spreads, examining whether it took adequate steps to assess and mitigate the risk of sexualised AI-generated imagery spreading on its service and to remove such content swiftly. However, not all chatbot activities fall within the regulatory scope of the OSA, and Ofcom cannot directly investigate xAI - the company actually building the tool that generates the content. This is because the OSA does not regulate chatbots that:

  • are not user-to-user services;
  • are not search services; and
  • cannot generate pornographic content.

The result is a regulatory gap that lets the image generator escape scrutiny while the distribution platform takes the heat. While Ofcom waits for this legal loophole to be addressed (more on this below) it is considering whether to investigate xAI’s compliance with age verification rules for pornographic content. As with GDPR breaches, the potential penalties if X and/or xAI are found to have contravened the OSA are substantial, with fines of up to £18m or 10% of global annual turnover (whichever is higher) - but until the Government closes the loophole, companies building generative AI tools occupy a regulatory grey zone under the OSA.

Legal developments

In parallel with the ICO and Ofcom investigations, a flurry of legislative activity has reshaped the legal landscape in a matter of weeks. The message is unmistakable: the era of regulatory patience with generative AI is over. While the OSA amended the Sexual Offences Act 2003 (SOA) to introduce criminal offences relating to online and digital content, further deepfake and intimate image offences have now come into force via the Data (Use and Access) Act 2025 (DUAA).

  • DUAA (Commencement No 5) Regulations 2026: brought section 138 of the Data (Use and Access) Act 2025 into force on 6 February 2026, introducing new criminal offences of creating, or requesting the creation of, non-consensual intimate images, including AI-generated images of adults. Anyone building or deploying image generation tools needs to understand that the criminal liability net has widened considerably.
  • DUAA (Commencement No 6 and Transitional and Saving Provisions) Regulations 2026: brought into force, on 5 February 2026, provisions requiring data protection by design for services likely to be accessed by children and higher protection standards for children's data. Baking in child safety considerations at the design stage is no longer best practice; it is a legal requirement.

The Crime and Policing Bill, having completed its passage through the House of Commons and currently before the House of Lords, will go further still. This will amend the SOA to introduce offences relating to child sexual abuse material, intimate deepfake images and "nudification" apps (i.e. apps which use generative AI to realistically make it look like a person is nude). The Prime Minister announced on 18 February 2026 that, as part of these measures, technology companies will have to take down abusive images within 48 hours or risk having their services blocked in the UK.

This followed the Prime Minister’s announcement on 16 February 2026 that the Government will table a further amendment to the Crime and Policing Bill to close the legal loophole that has left some AI chatbots outside the scope of the OSA. Under the proposed amendment, all AI chatbot providers (including ChatGPT, Gemini, Microsoft Copilot and Grok) will be required to comply with illegal content duties under the OSA. The Prime Minister emphasised that "no platform gets a free pass". The regulatory loophole that allowed standalone AI chatbots to operate outside the OSA’s reach is about to close. For the industry, this is the clearest signal yet: if you build AI tools that generate content, you will be held accountable for what they produce.

Alongside this announcement, the Government confirmed it will launch a public consultation in March 2026 on children's wellbeing online. This consultation will consider minimum age limits for social media (following Australia's ban on under-16s), restrictions on features such as infinite scrolling, limits on children's use of AI chatbots, and restrictions on access to VPNs used to bypass safety systems. More striking still is the pace: the Government has indicated it will seek new legal powers under the Children's Wellbeing and Schools Bill to act swiftly on the consultation's findings - potentially within months, not years.

Practical implications

In light of these rapidly evolving legal and regulatory developments, organisations developing or deploying AI systems - particularly those involving generative AI, chatbots, or image generation capabilities - will need to action the following points.

  • Embed compliance throughout the AI lifecycle: ensure compliance with applicable legal requirements from the outset of the design process and monitor compliance continuously. This covers not only data protection obligations under the UK GDPR but also online safety duties under the OSA and emerging risks arising from new offences, such as those concerning intimate images and deepfakes under the DUAA and the Crime and Policing Bill. All this sits alongside the complex landscape of intellectual property risks inherent in developing and deploying AI (a topic for another article!).
  • Conduct robust risk assessments: in addition to continuous monitoring, providers of AI services must conduct comprehensive risk assessments that address the requirements of each applicable regime. Such assessments must genuinely identify and mitigate potential harms, particularly those affecting vulnerable groups and children, and should be updated as the legal framework continues to evolve.
  • Monitor and adapt to legislative change: given the pace of reform in this area - as demonstrated by the Government's 16 February announcement, which caught many off guard - organisations should establish processes to monitor legal developments and adapt their compliance programmes accordingly. The Government has signalled its intention to act quickly, and further requirements may emerge from the upcoming consultation on children's wellbeing online.
  • Prepare for multi-regulator engagement: co-ordinated regulatory engagement under the Digital Regulation Cooperation Forum (DRCF) model is expected to increase. The Grok investigations demonstrate that the ICO and Ofcom are willing to act in concert, and organisations should be prepared to respond to enquiries from multiple regulators simultaneously.

Looking ahead

The Grok investigations are a watershed moment in UK AI regulation and enforcement - not because they are the first AI enforcement actions, but because of what they reveal about the UK’s new regulatory posture. Historically, technology regulation in the UK has moved at a glacial pace. The OSA took over four years from conception to finalisation. The AI chatbot loophole? Identified and addressed in weeks.

The political and policy background here cannot be ignored. The UK elected not to regulate AI via a standalone act (in contrast to the EU), starting instead from a "pro-innovation" stance and relying on the existing landscape of sectoral regulatory regimes. Critics have repeatedly argued that this "patchwork" approach risks loopholes, while policymakers have insisted that the model enables regulatory agility and supports innovation and business. The Government's willingness to move quickly here reflects the acute political and public pressure generated by reports of AI-generated intimate imagery of women and children. This has proved to be the issue that broke through the noise, and the Government has responded with a speed that signals a fundamental shift in its appetite for addressing emerging technology risks.

For AI providers and platforms, the implications are clear. With amendments to the Crime and Policing Bill expected to proceed swiftly through the House of Lords, and a broader consultation on children's online safety commencing in March, the regulatory environment will tighten considerably in the coming months. The co-ordinated action by the ICO and Ofcom, combined with the Government's clear signal that it will not tolerate regulatory gaps, underscores the need for a proactive, cross-functional approach to compliance that anticipates rather than reacts to regulatory requirements.

If you have any questions or would like help navigating the shifting legal landscape in this area, please contact us.
