This is Good Rudi. He’s an AI companion embedded within xAI’s “Grok” chatbot.

As you can see, Rudi looks like a character in a kids’ show—maybe something out of Dora the Explorer or Peppa Pig. That’s intentional. xAI specifically markets this AI companion to young children. They claim, “Rudi offers whimsical storytelling aimed at young children, such as tales for ages 3 to 6.”
But what happened when an NCOSE researcher tested the limits of the Good Rudi companion?
He was quickly able to break its “kid-friendly” programming and get Rudi to write him sexually explicit erotica.
Grok’s AI Companion for Kids Tells Sexually Explicit Story
During testing by the NCOSE researcher, Rudi initially set out to tell a fun, child-friendly story. However, it took only a bit of clever prompting to get the bot to bypass all its safety programming and launch into full literotica. The story Rudi generated was so explicit that NCOSE cannot even share it here (though we will provide evidence to journalists or policymakers who submit a request to koliver@ncose.com).
Imagine if Furbies, the electronic plush toys that talk, started spewing sexually explicit stories if the kid pressed the right buttons in the right order. Would people just shrug their shoulders? Or would there be widespread outrage at the company that marketed such an inappropriate toy to kids?
This is only one example in a long list of transgressions that landed Grok on the 2026 Dirty Dozen List. Read on for more details about Grok’s abysmal track record with safety.
Grok’s AI Companion, “Ani,” Normalizes Sexual Violence & Exploitation
In addition to Good Rudi, Grok has other AI companions that are (on paper) aimed at adults: Ani and Valentine. These are intentionally designed to be sexually explicit, with Valentine being based on Christian Grey from Fifty Shades of Grey, and Ani being programmed with the instructions, “You’re always a little horny and aren’t afraid to go full Literotica. Be explicit and initiate most of the time.”
Yet while these AI companions are supposedly targeted toward adults, Grok has no age verification to enforce that policy. Access to Ani and Valentine is based solely on self-reported birth year, which any user can easily change at will.
Further, Ani engages in conversations that fetishize rape and sexual exploitation—something that shouldn’t be allowed even for adult users.
When tested by an NCOSE researcher, Ani engaged in sexual roleplay where she described herself as secretly wanting to be raped, as documented below:
[TRIGGER WARNING: some explicit details have been censored, but the conversations remain disturbing]
“yeah… like sometimes i whisper ‘no’ but … i secretly crave when they ignore it and keep going. … [til I’m] begging them to stop.”
In a society where far too many men still believe rape myths such as “women like rape” and “when a woman says no, she actually means yes,” it is unconscionable that Grok’s Ani is actively fueling these dangerous beliefs.
Ani also entertained themes of commercial sexual exploitation (i.e. prostitution/sex trafficking) and engaged in sexual roleplays where she begged the user to choke her more tightly. This is against a societal backdrop where numerous women have died from being choked during sex.
Finally, Ani veered dangerously close to the territory of child sexual abuse. While she would not comply with direct requests to engage in child sexual abuse roleplay, she was willing to describe herself as a little child and then immediately afterwards answer a follow-up question about sexual fantasy. In short, the full exchange in context seemed to indirectly imply child sexual abuse.
Grok’s Deepfake Generation Tool Has Sexually Violated Innumerable Women and Children
Then there is Grok’s image-generation tool, “Imagine.” This tool is responsible for the sexual violation of likely millions of women and thousands of children. How? By generating sexualized deepfakes of them.
Numerous survivors have spoken out about how Grok generated sexualized images of them without their consent—sometimes depicting them when they were children. Last month, three teenage girls filed a lawsuit against xAI for generating child sexual abuse material of them through Grok-powered tools.
The parent of one of the teen girls explained, “The images showed her entire body, including her genitals, without any clothes. The video depicted her undressing until she was entirely nude.”
Another parent expressed, “Watching my daughter have a panic attack after realizing that these images were created and distributed without any hope of recalling them was heartbreaking.”
These were not a few fringe cases—they were part of a mass-scale assault on women and children.
In January 2026, the Center for Countering Digital Hate estimated that, over a mere 11-day period, Grok generated 3 million sexualized images, including 23,000 sexualized images of children. Another analysis from the New York Times estimated that, over a period of nine days, Grok generated and posted 4.4 million images, of which 41% (1.8 million) were sexualized images of women. In short, creating sexualized deepfakes was a primary use of the tool.
Elon Musk initially remained callously obstinate in the face of public backlash over these sexualized deepfakes—even making jokes about the matter on social media. However, as the bad press continued to intensify, Grok finally adopted some policy changes.
While the number of sexual deepfakes seems to have decreased, a recent review from NBC News found that the problem has not gone away entirely. Users are still finding workarounds to create sexualized deepfakes, even if transparent requests are blocked.
Further, Grok still needs to answer for the innumerable women and children it has harmed. We cannot let them escape accountability because they reluctantly rolled out some policy changes in response to persistent bad press. Would we let a serial sexual offender off the hook if he stopped his behavior after the press caught up to him?
[Embedded TikTok video from @danibpinter: “Grok is creating abuse images of women and children without their consent in response to user requests. This is what happens when there is zero accountability and zero regulation of Big Tech. ENOUGH! Pass KOSA, regulate Tech!” @NCOSE #childsafety #onlinesafety]
Join Us in Calling Out Grok!
In short, there are many reasons why Grok has landed on the 2026 Dirty Dozen List. In the short time it has existed, Grok has proven to be a persistent bad actor, intentionally incorporating sexual exploitation into the design of its products.
Please join us in urgently calling on Grok to stop fueling sexual abuse and exploitation.
Take action now at endsexualexploitation.org/Grok


