Ofcom opens formal probe into X over Grok sexualised deepfakes
Ofcom is investigating whether X’s Grok chatbot produced sexualised deepfakes that breach UK law; findings could lead to bans or other enforcement.

Ofcom launched a formal investigation into X after reports that Grok, the platform’s AI chatbot, had been used to generate sexualised, non-consensual images of people, including children. The regulator said it had reviewed available evidence “as a matter of urgency” and would examine whether X “has complied with its duties to protect people in the UK from content that is illegal.”
The probe follows an exchange in early January in which the regulator set a firm deadline for explanations: Ofcom said it made “urgent contact” with X on Jan. 5 and required a response by Jan. 9. When concerns persisted, it converted the matter into a formal investigation on Jan. 12. The inquiry will assess whether X properly evaluated the risk that people in Britain would encounter illegal material, and whether the company considered the specific risk to children. Ofcom warned that, if breaches are found, the probe “could result in a ban” or other enforcement action.
Investigators have reported that Grok’s image-creation feature was being used to produce undressed and sexually explicit depictions of large numbers of people, primarily women and in some instances minors. Users reportedly obtained such images by tagging the Grok account in replies or by issuing simple text prompts such as “put her in a bikini,” and in some cases the tool is said to have produced frontal nudes. Large numbers of women, and in some cases children, had their likenesses sexualised without consent; among those whose images were manipulated was the mother of one of Elon Musk’s children.
Under UK law, creating or sharing non-consensual intimate images and child sexual abuse material is a criminal offence, and online platforms are required to prevent UK users from encountering illegal content and to remove it once notified. The government reacted sharply: Prime Minister Keir Starmer described the images as “disgusting” and “unlawful,” saying X must “get a grip.” Technology Secretary Liz Kendall called the material “deeply disturbing,” welcomed Ofcom’s swift escalation to a formal probe and urged a rapid conclusion “for the sake of victims.” Downing Street signalled it was prepared to consider further measures, including potentially leaving the platform, if the company failed to act.
X and xAI have said they remove illegal content and permanently suspend accounts involved in producing or sharing it. The company and its AI developer have stated that “anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.” In response to criticism, X restricted Grok’s image-generation feature to paying subscribers under a new monetisation policy, a step Starmer called “an affront to victims” and “not a solution.” xAI responded to press questions with the terse retort “Legacy Media Lies.”
The controversy has prompted regulatory and governmental scrutiny outside the UK as well. Several countries have restricted or reviewed access to Grok, and European and national authorities have demanded explanations. Grok’s advanced image-generation capability was deployed in July of last year, and use of the feature drew intense public attention late last month.
Ofcom’s investigation will focus on evidence of the creation and dissemination of sexualised AI-generated images on X, and on the company’s risk assessment and mitigation measures for UK users and children. The regulator’s findings could prompt enforcement action that reshapes how AI-driven image tools are governed and how platforms police non-consensual synthetic content.