Ashley looked at her screen in disbelief. She could not fathom the image that was in front of her:
A picture of herself, stripped down to a bikini, and bent over.
“I felt horrified, I felt violated, especially seeing my toddler’s backpack in the back of it,” she recalled.
This image and several other sexualized images of Ashley St. Clair, mother of one of Elon Musk’s children, were generated with xAI’s Grok. Musk is the founder of xAI, whose chatbot, Grok, is being widely used to create sexualized images of women and children without consent. One of the sexualized images Grok made of St. Clair depicted her when she was only 14 years old.
When St. Clair reported these sexualized photos to X, the platform removed some of them, but responded that others did not violate any guidelines, despite the fact that they had all been generated without her consent.
But she is not the sole victim. Users are flocking to Grok, where the bot is serving as a factory for sexual deepfakes. According to a Washington Post article published on January 6, 2026, the deepfake detection company Copyleaks estimated that, “at one point last week, Grok was generating about one nonconsensual sexual image per minute.”
Researchers at AI Forensics examined 50,000 Grok prompts and 20,000 images generated by the tool between December 25 and January 1. Their findings were quite astonishing:
Many requests asked for images to be stripped of clothing completely. More than half of the images generated during this time were of people in underwear or bikinis. And despite Musk saying that sexual images of children are not allowed on Grok, 2% of the images found by AI Forensics depicted children, some of them younger than 5.
Users have been instructing Grok to manipulate fully clothed images of women and children into deepfakes in which they are stripped down to a bikini, bent over, on their knees, and posed in other ways too graphic for us to name.
Now, Grok has responded to the flood of backlash in the news by announcing that it will supposedly block the functionality to generate sexualized deepfakes in “jurisdictions where it is illegal.” But there are several problems with this purported change.
Undressing Feature still Available on Standalone Grok App
Firstly, reports have found that the new geo-based restrictions only apply when Grok is accessed through the X app. They do not apply on the standalone Grok app. So, in reality, Grok is still allowing the virtual undressing of women everywhere in the world.
Is Grok’s Announcement an Admission of Criminality?
Okay, so Grok said it would stop allowing the generation of sexualized deepfakes in jurisdictions where they’re illegal. This raises the question: why was Grok ever allowing its tool to be used for unlawful purposes? And why does it still allow this through the standalone app?
By announcing the geo-based restrictions, is Grok admitting to having violated the law? This vindicates NCOSE’s recent call for the Department of Justice and the Federal Trade Commission to investigate Grok for illegal activity.
Will the New Grok Restrictions Apply to the U.S.?
As of January 16, 2026, NCOSE tested Grok and found that the feature to strip people down to a bikini is still available in the U.S., both on the standalone app and from within the X app. Will this change as Grok continues to roll out geo-based restrictions? Are the sexualized deepfakes Grok was creating illegal in the U.S.?
Quite possibly, as the TAKE IT DOWN Act was passed last year, finally making image-based sexual abuse (including AI-generated sexually explicit images) a crime. Many of the images generated by Grok could be considered illegal under the TAKE IT DOWN Act. Even more serious, the sexualized images of children Grok generated could violate federal child pornography and exploitation laws.
However, there is some ambiguity as certain bikini pictures would arguably not be considered sexually explicit. Yet reports have found that Grok users frequently requested “transparent bikinis,” as well as other extremely graphic sexualized imagery. Ultimately, each case will need to be examined to determine whether it violates the TAKE IT DOWN Act or child exploitation laws.
But the bigger question is: what is the value in Grok nudifying and sexualizing women and children without their consent? (That’s rhetorical. There is no value!)
Grok’s New Restrictions Should Apply Across the Board
Of course, our stance is that sexualized deepfakes should be banned COMPLETELY, regardless of the region’s legal framework. They are a form of image-based sexual abuse, and an egregious violation of individuals’ privacy, dignity, personal safety, and wellbeing.
Further, considering that 99% of sexual deepfakes are made of women, this is blatant gender-based harassment. Sexualized deepfakes attack women as a class. As long as tools like Grok can be used in this way, no woman is safe to even exist online. It tells women, “Shut up, don’t even show your face, or else.”
Grok’s Half-hearted Changes Further Underscore Callous Attitude
The fact that Elon Musk does not recognize that non-consensually stripping women of their clothing should be banned everywhere is abhorrent. Yet it is not surprising, considering his callous and even cruel attitude toward the problem of image-based sexual abuse on Grok.
A couple weeks ago, Musk made light of the problem by reposting an image of a toaster with a bikini on it, generated by Grok, with the caption: “Grok can put a bikini on everything,” followed by two laughing emojis.
This is a slap in the face to survivors who have been violated and traumatized by Musk’s tool.
ACTION: Call on the DOJ and FTC to Investigate Grok for Illegal Activity!
xAI must be held accountable for any violations of the law it may have committed via Grok. Please take action now, calling on the DOJ and FTC to conduct a full investigation and hold xAI accountable!