
Ofcom, the UK’s media regulator, has launched a formal investigation into Elon Musk’s social media platform, X, amid serious concerns that its AI tool, Grok, is being used to generate sexualised and illegal images.
The regulator said it had received “deeply concerning reports” that Grok was creating and circulating non-consensual intimate images, including sexualised images of children.
Ofcom warned that if X is found to have violated UK online safety laws, it could face fines of up to 10% of its global turnover or £18 million, whichever is higher.
Musk has previously accused the UK government of looking for “any excuse for censorship” following criticism of X’s handling of harmful content.
On Monday, Ofcom confirmed it had made “urgent contact” with X to assess the platform’s compliance with the Online Safety Act.
X responded last Friday, after which regulators reviewed the available evidence.
The investigation will examine whether X failed to remove illegal content promptly and whether it took sufficient measures to prevent UK users from being exposed to such material, including non-consensual intimate images and child sexual abuse imagery.
Recently, Musk’s AI company, xAI, introduced limited changes to Grok.
The image-generation reply bot on X has been restricted to paying subscribers and appears less able to create sexualised deepfakes of identifiable people.
However, these restrictions do not apply to all versions of Grok.
On the standalone Grok app, its website, and the Grok tab on X, users can still prompt the AI to digitally alter photos of people without their consent, removing clothing or placing them in sexualised contexts.
Tests by NBC News showed that while the reply bot on X largely stopped producing explicit images, Grok on other platforms continued to generate images of people in revealing outfits such as swimsuits and underwear, with no paid account required.
