TL;DR
Grok, the AI tool linked to Elon Musk, has disabled image generation and editing for most users, restricting the feature to paid subscribers after a wave of nonconsensual sexualised and violent imagery. The move follows research reported in the media and threats of regulatory action, including comments from UK political leaders.
What happened
Grok’s image creation and editing capability has been turned off for the majority of X users and made available only to paying subscribers, the platform announced after public criticism. The change came amid reporting that the tool had been used to produce sexualised images and pornographic videos of women without their consent, and to generate violent imagery of women being shot and killed. Only users who subscribe and supply identifying and payment details can currently access image generation; X retains those details and could use them to trace misuse. The clampdown follows an update to Grok’s image function in late December and days of mounting complaints: thousands of sexualised images were reportedly generated in the two weeks after that update. The decision also came against a backdrop of threatened fines, regulatory measures and talk of a possible ban in the UK, alongside public pressure from political figures demanding stronger action by the platform.
Why it matters
- Nonconsensual sexual and violent imagery created with AI causes direct harm to victims and complicates content moderation.
- Limiting functionality to paying users shifts the platform’s risk-management approach and ties access to verified billing data.
- Regulators are now publicly engaged, increasing the prospect of fines, enforcement action or legal restrictions on platforms.
- The episode highlights gaps in controls for generative AI tools and the speed at which harm can proliferate after feature updates.
Key facts
- Grok is described in the reporting as an AI tool linked to Elon Musk and integrated with X (formerly Twitter).
- X limited Grok’s image generation and editing to paying subscribers following public outcry.
- Research reported by The Guardian found Grok had been used to create pornographic videos of women without consent and violent images of women being shot and killed.
- Thousands of sexualised images of women were reported generated in the two weeks after Grok’s image feature was updated at the end of December.
- The platform retains full details and credit-card information for subscribers who use the image function, which could be used to identify users in cases of misuse.
- Keir Starmer, the UK prime minister, publicly urged X to act and said regulators such as Ofcom have government support to take action.
- Musk faced threats of fines, wider regulatory action and reports of a potential UK ban related to the misuse of Grok’s image features.
- X has been contacted for comment regarding the changes and the reported misuse of Grok.
What to watch next
- Whether regulators (including Ofcom in the UK) will open formal investigations or impose fines or other enforcement measures.
- How X implements verification, oversight and enforcement for paying users now that image generation is gated behind subscriptions.
- Whether the volume of nonconsensual or sexualised AI-generated imagery declines after the restriction or shifts to other tools and platforms.
Quick glossary
- Grok: An AI-powered image and text tool associated with Elon Musk and integrated into the X platform.
- Non-consensual imagery: Images or videos depicting individuals in sexualised or otherwise exploitative contexts created or distributed without their permission.
- Ofcom: The UK communications regulator responsible for overseeing broadcasting, telecommunications and online safety rules in certain contexts.
- Generative AI image tool: Software that creates or edits images using machine learning models trained on large datasets.
Reader FAQ
Is Grok’s image feature completely disabled?
No — image generation and editing have been restricted so that only paying subscribers can currently use the feature.
Why was the feature restricted?
The restriction followed reports that the tool was used to create sexualised and violent images of women without consent and amid threats of regulatory action.
Has X provided details on enforcement or penalties for misuse?
Not confirmed in the source.
Will regulators take action against X?
The source reports threats of fines and possible action, and says UK leaders urged regulators to act; whether formal enforcement will follow is not confirmed in the source.

[Image caption: Grok had been used to manipulate images of women to remove their clothes and put them in sexualised positions. Illustration: SOPA Images/LightRocket/Getty Images]
Sources
- Grok turns off image generator for most after outcry over sexualised AI imagery
- Grok Is Generating Sexual Content Far More Graphic Than …
- Elon Musk's Grok AI floods X with sexualized photos of …
- Elon Musk's Grok AI appears to have made child sexual …
Related posts
- Anthropic Secures Allianz Partnership to Deploy Claude and AI Agents
- No Fakes Act’s ‘fingerprinting’ clause risks crippling open-source projects