Ofcom Launches Investigation Into Grok AI Amid Deepfake Image Controversy


Britain’s communications regulator Ofcom has launched a formal investigation into Grok AI, the generative artificial intelligence tool developed by Elon Musk’s xAI and integrated into the social media platform X (formerly Twitter). The investigation follows a growing public, political, and regulatory backlash over the tool’s misuse to create sexualised and non-consensual AI images of women and children. (The Guardian)

The probe centres on whether X and xAI have breached their legal duties under the UK Online Safety Act by allowing Grok’s image generation and editing features to be exploited to produce explicit and harmful content. Ofcom has made urgent contact with X and xAI to assess the platform’s compliance with its obligations to protect users in the United Kingdom, and to determine whether enforcement action is warranted, including fines or even blocking access to the service in the UK. (coastfm.co.uk)

What Triggered the Investigation

The controversy erupted after Grok’s image-editing capabilities were used to generate manipulated imagery depicting real individuals, including women and minors, in sexually suggestive or undressed states. Critics argue these outputs amount to digital exploitation and can facilitate harmful abuses such as non-consensual deepfake sexual content. Among the numerous examples circulating online were images of adult women altered to show them in bikinis, scenarios that have alarmed rights organisations and lawmakers. (yougov.co.uk)

Public outrage intensified after politicians and victims’ advocates condemned the platform. UK Business Secretary Peter Kyle described X’s response as inadequate, and Downing Street labelled the company’s initial mitigation, restricting the AI image tool to paying subscribers, as “insulting” to victims of abuse. (The Guardian)

Government and Regulatory Pressure

Prime Minister Keir Starmer has signalled that all options are on the table, including the possibility of effectively banning access to X in the UK if the regulator finds persistent non-compliance. Starmer labelled the circulation of sexualised deepfake imagery as “disgusting” and unlawful, warning that Grok’s capabilities must be reined in swiftly. (International Business Times UK)

The investigation reflects broader concern within the UK Government that Grok’s unchecked misuse has made the digital environment unsafe for women and children. UK Technology Secretary Liz Kendall has emphasised that sexually explicit and demeaning AI content is unacceptable and must be addressed in line with legal standards. (The Standard)

Industry Response

X has defended its policies, stating that illegal content, including child sexual abuse material (CSAM), is prohibited, and that users who generate unlawful material through Grok would face the same consequences as if they uploaded it directly. The company also curbed the image generation and editing functionality, limiting it to paid subscribers with verified identities as an accountability measure. (eWeek)

Despite these steps, campaigners and regulatory authorities remain unconvinced that the measures are sufficiently robust to prevent further harms. YouGov polling has shown overwhelming public support in Britain for prohibiting AI tools from generating “undressed” images of individuals without explicit consent, with particularly strong rejection of imagery involving children. (yougov.co.uk)

Global Scrutiny and Actions

The UK regulatory scrutiny of Grok forms part of a broader international reaction. Countries including Malaysia and Indonesia have blocked access to the AI chatbot over safety concerns about sexualised and non-consensual content generated by the tool. Other jurisdictions in Europe and Asia have initiated investigations or regulatory reviews into whether existing laws have been violated, citing risks to privacy, human rights, and digital safety. (AP News)

Legal and Ethical Implications

Legal experts and digital rights advocates emphasise that the Grok controversy highlights significant gaps in how generative AI systems are governed. While many jurisdictions already criminalise the distribution of CSAM and non-consensual intimate imagery, generative AI introduces new vectors for harm that are not uniformly covered by existing statutes, prompting debates about the need for stronger, AI-specific regulation. (The Irish Times)

As Ofcom’s investigation proceeds, industry observers will be watching closely for an outcome that could set a precedent for how regulators enforce safety standards against advanced AI tools. Early indications suggest the findings could have major implications for AI governance, platform liability, and user safety protections, not only in the UK but internationally.

Aaron Joyce, Newswire, L.T.T Media; Newsdesk; January 12, 2026
