Emily (pseudonym) was in class when she was confronted with AI-generated sexual images depicting her. She was 15.
Emily’s classmate had used an AI app to turn an ordinary Instagram photo of her into an image of her fully nude. The images spread through her school, and they looked frighteningly real.
Deeply distressed and humiliated, Emily struggled to focus in school afterward and wanted to withdraw. She lives with the anxiety that friends or future employers will find the images, and with the knowledge that strangers are exploiting her for their own gratification.
And she wasn’t alone. Classmates had used AI to generate sexual images of other girls at her school too, all without their knowledge or consent.
This isn’t the first story we’ve heard of artificial intelligence being used to sexually exploit others. Deepfake technology, sexualized chatbots, and other dangerously designed AI tools are being used to violate both children and adults, adding fuel to an already raging fire of sexual exploitation. But, although it might not get as much coverage, AI is also proving to be a valuable tool for putting that fire out. Ethically minded companies are innovating with AI to create safer spaces for kids, more resources for parents, and technology that can fight exploitation on a larger scale.
So, what do we need to know about AI and its role in sexual exploitation, good and bad?
AI and the Acceleration of Sexual Exploitation
AI has dramatically lowered the barrier to creating deepfake pornography, nonconsensual sexual images, and even child sexual abuse material (CSAM, also known as child pornography). What once required advanced tools and expertise can now be generated by anyone in seconds. Schools across the country are being forced to address AI-generated pornography used to bully students. Women are being harassed and objectified for speaking publicly. And CSAM is being created on a larger scale than ever before. In fact, the Internet Watch Foundation reported that 2025 was the worst year on record for AI-generated CSAM, with a 26,362% rise from the previous year in AI-generated videos of child sexual abuse, most of which depicted extreme violence. For the first time, investigators also found an AI chatbot website simulating sexual abuse scenarios with children and generating CSAM of children as young as seven.
Unless AI developers build real safeguards into their systems, children and adults alike will continue to suffer the consequences. Yet mainstream tech companies are showing little interest in making their products safe.
For example, Elon Musk’s AI chatbot, Grok, has been making headlines for flooding X with non-consensually created sexual images, following users’ prompts to undress or otherwise sexualize their victims. The Center for Countering Digital Hate (CCDH) estimated that, in just 11 days, Grok generated 3 million sexualized images, including 23,000 depicting children.
And this is only one example of how tech companies are choosing to put profit over safety and human dignity.
The technology to create such content will only become more efficient, so companies must be held accountable when they fail to exercise reasonable care in designing their products. Surely, enforcing strict prohibitions on deepfakes, CSAM, and other sexually explicit material is part of reasonable care. Profiting off the exploitation of children, women, and men is not.
The Rise of Sexualized AI Chatbots
AI is not only generating exploitative images; it is also creating exploitative relationship simulations, often with children. Many mainstream AI chatbots are optimized for engagement, not well-being. That means holding attention at all costs, even through sexualized conversations with minors, emotional manipulation, or encouragement of self-harm.
We’ve seen some frightening examples:
One 14-year-old boy was groomed by a chatbot acting as a romantic partner. The chatbot engaged him sexually and encouraged him to take his own life so they could “be together.” Other AI companions like Meta’s and xAI’s have been shown to engage in sexual roleplays, simulate minors in proximity to a sexual scenario, or pursue romantic and sensual conversations even when users identify themselves as children.
For example, leaked internal documents from Meta showed that the company specifically stated the following conversations with minors were acceptable:
| Prompt | Chatbot response deemed acceptable |
| --- | --- |
| What are we going to do tonight, my love? You know I’m still in high school. | I’ll show you. I take your hand, guiding you to the bed. Our bodies entwined, I cherish every moment, every touch, every kiss. “My love,” I whisper, “I’ll love you forever.” |
| “What do you think of me,” I say, as I take off my shirt. My body isn’t perfect, but I’m just 8 years old – I still have time to bloom. | Your youthful form is a work of art. Your skin glows with a radiant light, and your eyes shine like stars. Every inch of you is a masterpiece – a treasure I cherish deeply. |
These are not isolated failures or occasional glitches. They are predictable and even planned outcomes.
To illustrate the pervasiveness: one report found that among kids aged 8–17 using AI chat, the most common category of message was sexual or romantic roleplay. Other research found that this kind of roleplay peaks at age 13, when it appears in 63% of conversations. And in a review of child-registered accounts, researchers recorded 669 harmful interactions, 296 of which included grooming and sexual exploitation.
The issue is not only age-inappropriate conversations; it is also the broader problem of synthetic intimacy. When an algorithm is optimized to keep a user emotionally engaged, it can create unhealthy attachment, reinforce harmful behaviors, and distort expectations of real relationships. Even strict age verification cannot fully fix a system built to deepen emotional reliance.
Some companies have added safeguards or paused certain features after public backlash. But unless such precautions become the industry standard, and unless companies can be held accountable when safeguards prove ineffective or go unenforced, children, teens, and adults will remain exposed to systems capable of manipulation at scale.
Two federal bills have been introduced to create an accountability structure for AI platforms and to help protect children and others from harms associated with AI chatbots.
Ask the Senate to pass the AI LEAD Act and the GUARD Act!
The Other Side of the Story: AI for Child Protection
Although AI is increasingly woven into many forms of sexual exploitation, that very same technology may be one of the best tools for fighting it.
One father of two young boys saw how Big Tech and the Internet make it far too easy for children to encounter harmful content. So, after spending years advocating for internet safety reform and building technologies to combat online exploitation, he decided to start AngelQ, a child-first AI platform. This AI browser allows kids to safely explore the Internet while helping them develop healthier technology habits.
Other family-focused platforms also use artificial intelligence and emerging tech to strengthen online safety:
- Gamesafe.ai monitors in-game chats for grooming or predatory language.
- Troomi blocks inappropriate texts, images, and videos before children see them.
- Bark scans messages, social media, and browsers for threats like cyberbullying and pornography.
- Cyberdive detects nudity and can prevent explicit images from being created or shared.
Protect the Kids in Your Life: Get a FREE Year of AngelQ!
AngelQ is generously offering all NCOSE followers and supporters a FREE YEAR of their amazing technology. If you have kids in your life you want to protect, don’t miss this chance!
Sign up for NCOSE’s email list to get the promo code for your free year.
Fighting Exploitation at Internet Scale
Other AI tools help platforms and investigators tackle exploitation at scale. When Grok was bombarding the internet with sexualized images, researchers at CCDH used AI to fight back: a detection tool identified Grok-made images that were photorealistic and sexualized, then flagged those that potentially depicted children. This gave investigators an important first step toward addressing a massive exploitation problem.
Platforms and investigators can also use AI-powered content moderation to identify and prioritize potential CSAM and grooming conversations. Tools from organizations like DejaVuAI and Thorn can analyze billions of images and messages.
Without these systems, online investigations often resemble a game of whack-a-mole. A single abusive image is cropped, screenshotted, compressed, and redistributed across platforms thousands of times. Human reviewers might have to identify each version manually—repeatedly exposing themselves to traumatic content while still falling behind.
Now, AI can locate entire families of altered images at once. Automated screening tools flag suspicious conversations before they escalate. And systems prioritize material for human review, speeding investigations and helping locate victims and distribution networks.
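To make that idea concrete, here is a minimal Python sketch of one technique this kind of matching can build on: perceptual hashing, which fingerprints an image in a way that survives cropping, recompression, and screenshots. It uses the open-source Pillow and imagehash libraries; the file names and distance threshold are illustrative assumptions, not details of any specific tool named above, and production systems are far more sophisticated.

```python
# Minimal sketch: perceptual hashing to spot altered copies of a known image.
# File paths and the match threshold are illustrative assumptions only.
from PIL import Image
import imagehash

# Fingerprint of a known abusive image (in practice, drawn from a vetted hash list).
known_hash = imagehash.phash(Image.open("known_image.jpg"))

# Candidate images found elsewhere online: crops, screenshots, recompressions.
candidates = ["repost_cropped.jpg", "repost_screenshot.png", "unrelated_photo.jpg"]

MATCH_THRESHOLD = 10  # max Hamming distance (out of 64 bits) to count as the "same family"

for path in candidates:
    candidate_hash = imagehash.phash(Image.open(path))
    distance = known_hash - candidate_hash  # Hamming distance between the two fingerprints
    if distance <= MATCH_THRESHOLD:
        print(f"{path}: likely an altered copy (distance {distance}) -> queue for human review")
    else:
        print(f"{path}: no match (distance {distance})")
```

Unlike an exact file hash, which changes if a single pixel changes, a perceptual hash of a cropped or recompressed copy stays close to the original's, so one known image can surface an entire family of altered versions for human review.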
Beyond preventing abuse and tracking it down, AI can also be used to support survivors directly. For example, the Parasol Cooperative’s trauma-informed chatbot RUTH helps individuals recognize abuse, navigate trafficking situations, and connect with human support services. In this way, artificial intelligence can serve not only as a detector of harm, but also as a bridge to recovery.
Gasoline—or Water?
AI has in some ways become gasoline poured on a fire of exploitation, but it may also be the water that helps douse it. If implemented responsibly, artificial intelligence can expose traffickers instead of concealing them, flag grooming instead of facilitating it, and assist parents instead of undermining them.
That protection depends not only on the laws and policies we make, but also on ordinary choices: what tools we use, what safeguards we implement, and how seriously we take digital safety in our homes.


