Grok, the AI chatbot built by Elon Musk's xAI, can no longer edit photos of real people to depict them in bikinis or other revealing clothing. The policy change is a direct response to weeks of complaints and mounting regulatory pressure over the tool's role in producing sexualized deepfakes.
Grok adds new protections against image editing
The "technological measures," as X describes them, now limit Grok's capabilities. The central change prevents the AI from editing pictures of real people to show them in revealing clothing, such as bikinis. The restriction applies to all users of the platform, regardless of subscription tier.
Grok will also geoblock "the ability of all users to generate images of real people in bikinis, underwear, and similar attire" in any country or region where producing such content is illegal. The move follows recent decisions by Malaysia and Indonesia to block the chatbot over safety concerns.

xAI is also putting all of Grok's image-generation tools behind a paywall, meaning only paying X subscribers will be able to use the AI to create or edit images. The company describes the move as an added layer of accountability meant to deter misuse.
Grok's new limits were announced just hours after California Attorney General Rob Bonta said his office was investigating xAI. The probe focuses on the large-scale production of nonconsensual intimate deepfake images.
One study found that many of the AI-generated images depicted people, including minors, in minimal clothing. Regulators in the UK and other countries have also opened their own investigations into the chatbot.
