On 3 February 2026, the ICO and Ofcom simultaneously announced investigations into X and xAI concerning the Grok AI chatbot. With similar investigations under way in the EU (under the Digital Services Act) and in the US (by the California Department of Justice), and with Westminster now moving to close loopholes in the online harms regime, one thing is clear: the regulatory landscape for generative AI is changing dramatically and at an unprecedented pace.
The ICO’s investigation is arguably the more straightforward regulatory path. The legal questions - whether personal data has been processed by X and xAI lawfully, fairly and transparently, and whether appropriate safeguards were built into Grok's design and deployment to prevent the generation of harmful manipulated images derived from personal data - are not new, nor is this the ICO’s first potential enforcement action arising from use of AI. William Malcolm, Executive Director of Regulatory Risk and Innovation at the ICO, described the reports as raising "deeply troubling questions about how people's personal data has been used to generate intimate or sexualised images without their knowledge or consent". The potential penalties if X and xAI are found to have contravened the UK GDPR are substantial and well known (up to £17.5m or 4% of annual worldwide turnover, whichever is higher). For the industry, the message is clear: if you are training AI models on personal data or processing personal data to generate images, you need a lawful basis, transparency and robust safeguards.
The Online Safety Act 2023 (OSA) presents a messier picture. Ofcom can investigate X as the platform where harmful imagery spreads, examining whether the platform took adequate steps to assess and mitigate the risk of sexualised AI-generated imagery spreading on its service and to remove such content swiftly. However, not all chatbot activities fall within the regulatory scope of the OSA, and Ofcom is unable to directly investigate xAI - the company actually building the tool that generates the content - because certain categories of standalone chatbot fall outside the OSA's definition of regulated services.
The result is a regulatory gap that lets the image generator escape scrutiny while the distribution platform takes the heat. While Ofcom waits for this legal loophole to be addressed (more on this below), it is considering whether to investigate xAI’s compliance with age verification rules for pornographic content. As with GDPR breaches, the potential penalties if X and/or xAI are found to have contravened the OSA are substantial, with fines of up to £18m or 10% of global annual turnover (whichever is higher) - but until the Government closes the loophole, companies building generative AI tools occupy a regulatory grey zone under the OSA.
In parallel with the ICO and Ofcom investigations, a flurry of legislative activity has reshaped the legal landscape in a matter of weeks. The message is unmistakable: the era of regulatory patience with generative AI is over. The OSA amended the Sexual Offences Act 2003 (SOA) to introduce new criminal offences relating to online and digital content, and further deepfake and intimate image offences have since come into force via the Data (Use and Access) Act 2025 (DUAA).
The Crime and Policing Bill, having completed its passage through the House of Commons and currently before the House of Lords, will go further still. The Bill will amend the SOA to introduce offences relating to child sexual abuse material, intimate deepfake images and "nudification" apps (i.e. apps which use generative AI to make it realistically appear that a person is nude). The Prime Minister announced on 18 February 2026 that, as part of these measures, technology companies will have to take down abusive images within 48 hours or risk having their services blocked in the UK.
This followed the Prime Minister’s announcement on 16 February 2026 that the Government will table a further amendment to the Crime and Policing Bill to close the legal loophole that has left some AI chatbots outside the scope of the OSA. Under the proposed amendment, all AI chatbot providers (including ChatGPT, Gemini, Microsoft Copilot and Grok) will be required to comply with illegal content duties under the OSA. The Prime Minister emphasised that "no platform gets a free pass". The regulatory loophole that allowed standalone AI chatbots to operate outside the OSA’s reach is about to close. For the industry, this is the clearest signal yet: if you build AI tools that generate content, you will be held accountable for what they produce.
Alongside this announcement, the Government confirmed it will launch a public consultation in March 2026 on children's wellbeing online. This consultation will consider minimum age limits for social media (following Australia's ban on under-16s), restrictions on features such as infinite scrolling, limits on children's use of AI chatbots, and restrictions on access to VPNs used to bypass safety systems. More striking still is the pace: the Government has indicated it will seek new legal powers under the Children's Wellbeing and Schools Bill to act swiftly on the consultation's findings - potentially within months, not years.
In light of these rapidly evolving legal and regulatory developments, organisations developing or deploying AI systems - particularly those involving generative AI, chatbots, or image generation capabilities - will need to review their compliance arrangements now rather than wait for the new rules to take effect.
The Grok investigations are a watershed moment in UK AI regulation and enforcement - not because they are the first AI enforcement actions, but because of what they reveal about the UK’s new regulatory posture. Historically, technology regulation in the UK has moved at a glacial pace. The OSA took over four years from conception to finalisation. The AI chatbot loophole? Identified and addressed in weeks.
The political and policy background here cannot be ignored. The UK elected not to regulate AI via a standalone act (in contrast to the EU) starting instead from a "pro innovation" stance and relying on the existing landscape of sectoral regulatory regimes. Critics have repeatedly argued that a "patchwork" approach risks loopholes, with policymakers insisting that this model enables regulatory agility while supporting innovation and business. The Government's willingness to move quickly here reflects the acute political and public pressure generated by reports of AI-generated intimate imagery of women and children - this has proved to be the issue that broke through the noise, and the Government has responded with a speed that signals a fundamental shift in appetite for addressing emerging technology risks.
For AI providers and platforms, the implications are clear. With amendments to the Crime and Policing Bill expected to proceed swiftly through the House of Lords, and a broader consultation on children's online safety commencing in March, the regulatory environment will tighten considerably in the coming months. The co-ordinated action by the ICO and Ofcom, combined with the Government's clear signal that it will not tolerate regulatory gaps, underscores the need for a proactive, cross-functional approach to compliance that anticipates rather than reacts to regulatory requirements.
If you have any questions or would like help navigating the shifting legal landscape in this area, please contact us.