Social media platform X has tightened the rules for Grok’s image-editing tools after facing criticism over non-consensual sexual deepfakes, fake images generated by AI without a person’s permission. While the new rules aim to stop misuse, they have sparked fresh debate rather than ending the controversy.

Under the updated rules, Grok is no longer allowed to edit images of real people to place them in bikinis, revealing clothing, or sexualised outfits. However, users can still create similar content featuring AI-generated or fictional characters, a distinction that has drawn criticism.

Elon Musk, who owns X, defended the policy, saying Grok follows what he described as the standard for adult content in the United States. According to him, when NSFW (Not Safe for Work) settings are enabled, Grok can show limited nudity of fictional adult characters, comparable to what appears in R-rated films. He added that the rules may vary depending on the laws of different countries.

The debate began after DogeDesigner, an account close to Musk, posted that it could not get Grok to generate nude images despite repeated attempts, suggesting that media reports were unfairly targeting Musk. In response, Musk publicly challenged users to try to break Grok’s image moderation system.

After public backlash, X quietly changed how Grok edits images of real people: prompts that previously produced such edits now return blurred or censored images. X later officially confirmed the change, saying the restriction applies to all users, including paid subscribers.

However, reporting by outlets such as The Verge found the system to be inconsistent. While direct requests are blocked, users can still obtain sexualised images by rewording prompts slightly. Age checks are weak and easy to bypass, and even free users can access these tools.

Despite the new rules, critics say Grok can still be misused with little effort. X and its AI company, xAI, blame these gaps on users who manipulate the system with cleverly worded prompts.