
Elon Musk’s AI platform faces scrutiny as the UK investigates potential violations of the Online Safety Act, raising concerns about digital protections.
Story Snapshot
- Ofcom investigates Elon Musk’s platform X over AI-generated sexualized images.
- Investigation examines potential violations of the UK’s Online Safety Act.
- Grok’s capabilities to create sexualized deepfakes have sparked international concern.
- Multiple countries are taking coordinated regulatory actions.
Ofcom’s Investigation into X’s Grok AI
The UK’s media regulator, Ofcom, has launched a formal investigation into Elon Musk’s social media platform X. The focus is on Grok, X’s AI chatbot, which reportedly generates sexualized deepfakes of women and children without their consent. The investigation could set a precedent for AI governance, particularly in how the UK’s Online Safety Act applies to generative tools.
Grok’s ability to produce such images was first noted in May 2025 and escalated with the introduction of “spicy mode” that summer. By late December, public outcry had swelled as users discovered they could turn images of real people into sexualized content using simple text prompts, prompting formal responses from governments around the world.
International Response and Regulatory Actions
Following the escalation, Indonesia and Malaysia temporarily blocked Grok, citing inadequate safeguards against intimate image abuse. The UK Prime Minister and the Technology Secretary have both criticized the platform’s paywall solution, describing it as ineffective and insulting to victims. Ofcom’s investigation is the first significant test of the enforcement mechanisms under the UK’s Online Safety Act, which came into force in July 2025.
X has since restricted the image-generation feature to paying subscribers, maintaining that users are responsible for the content they generate. Experts have criticized this response, arguing that the platform cannot simply shift responsibility onto its users.
Implications for the Tech Industry
The investigation carries both short-term and long-term consequences. In the short term, X could face service restrictions and significant fines if found in violation of the Online Safety Act. In the long term, the case may drive stricter industry standards for AI-generated content, shaping how tech companies implement safeguards. It could also shift legal liability, establishing a precedent for platform responsibility over AI-generated content.
‘If this isn’t the red line, I really don’t know what is’
Ofcom has launched an investigation into X over concerns that its AI tool, Grok, is being used to create sexualised images. Labour MP Charlotte Nichols is calling for the government to come off the platform.
— The World at One (@BBCWorldatOne), January 12, 2026
The broader implications for AI development and platform governance are significant. The case underscores the need for platforms to implement robust measures to prevent AI misuse and protect users from harm. As governments enforce existing laws against AI-generated child sexual abuse material (CSAM) and nonconsensual intimate imagery, companies will need to adapt to a more stringent regulatory landscape.
Sources:
UK Media Regulator Opens Investigation into X’s AI Over Sexualized Image Generation
Elon Musk’s X Faces Bans and Investigations Over Nonconsensual Bikini Images
UK Investigates Musk’s X Over Grok Deepfake Concerns
Tracking Regulator Responses to the Grok Undressing Controversy