
The Misuse of Artificial Intelligence in Producing Child Sexual Abuse Material 


From the convenience of voice-to-text converters and digital assistants like Siri to the unsettling realm of ‘deepfake’ technology and image generation, artificial intelligence has seamlessly merged into our daily lives, improving our efficiency while also enabling new forms of child sexual exploitation.

Before the widespread accessibility of artificial intelligence (AI), one way of producing child sexual abuse material (CSAM, the more apt term for ‘child pornography’) involved cutting out pictures of children and pasting them onto pornographic images to composite collages of CSAM. Today, predators can download readily available text-to-image AI software to generate CSAM depicting fictitious children, as well as produce AI-manipulated CSAM, in which real pictures of any child are digitally superimposed onto existing CSAM or other sexually explicit material.

With emerging technology and a never-ending supply of tools to enhance and edit, AI images have reached a level of sophistication that makes them virtually indistinguishable from genuine photographs. Given the realistic nature of AI-generated CSAM, efforts to identify and protect real victims of child sexual abuse are hampered by law enforcement’s difficulty in determining whether exploitative material is real or AI-generated.

As reported by the Internet Watch Foundation (IWF), AI CSAM is now realistic enough to be treated as real CSAM, marking AI technology as a new route for malicious actors to commercialize and sexually exploit children. The IWF report also found that AI CSAM revictimizes real victims: because predators habitually ‘collect’ material of their preferred victims, they can use ‘deepfake’ technology to train AI models on images of a chosen victim and reproduce explicit content in any portrayal they’d like. The same applies to famous children and to youths personally known to the predator; if a photograph of a child is available, that child is susceptible to victimization through AI CSAM.

It’s happening right now 

Last year, the National Center for Missing & Exploited Children (NCMEC)’s CyberTipline, a system for reporting the web-based sexual exploitation of children, received 4,700 reports of AI-generated CSAM, underscoring the immediate and prevalent threats to child safety posed by generative AI. It’s happening right now.

In Florida, a science teacher faced ‘child pornography’ charges after admitting to using yearbook photos of students from his school to produce CSAM. A few days later, another Florida man was arrested on ‘child pornography’ charges after photographing an underage girl in his neighborhood to synthetically create AI CSAM; a detective commented, “What he does is he takes the face of a child and then he sexualizes that, removes the clothing and poses the child and engages them in certain sexual activity. And that’s the images that he is making with this A.I.” The same holds in South Korea, where a 40-year-old man was sentenced to over two years in prison for producing 360 sexually exploitative images with a text-to-image AI program, using commands such as “10 years old,” “nude,” and “child” to generate the abusive material.

These AI applications have enabled the generation and dissemination of synthetic sexually explicit material (SSEM), including instances where students generate explicit content involving their underage classmates. In Illinois, a 15-year-old’s photo with her friends before a school dance was digitally manipulated into sexually explicit images and shared among her classmates. In another instance, male students at a New Jersey high school compiled images from peers’ social media accounts to non-consensually produce and spread explicit photos of more than 30 of their underage female classmates. In Egypt, a 17-year-old girl died by suicide after a boy threatened to distribute, and eventually shared, digitally altered explicit images of her; she suffered severe emotional distress upon their dissemination, endured people’s vile comments, and worried that her family believed the images were authentic.

Children victimized through lawless ‘deepfake’ technology describe experiencing extreme violation, anxiety, and depression; their sense of safety, autonomy, and self-worth is profoundly undermined. Why are corporations still allowed to commercialize and profit from their exploitation?

Despite its exploitative nature, deepfake pornography has gained immense popularity. The 2023 State of Deepfakes report identified 95,820 deepfake videos online, 98% of which were pornographic. For instance, DeepNude, one of many exploitative applications hosted on Microsoft’s GitHub and promising users they could “See anyone naked,” received 545,162 visits and nearly 100,000 active users before selling for $30,000.

As one of the corporations named to the National Center on Sexual Exploitation’s 2024 Dirty Dozen List, Microsoft’s GitHub is a leading perpetrator in the commercialization and proliferation of synthetic sexually explicit material, hosting the ‘nudifying’ technology that allows perpetrators to generate realistic synthetic CSAM.

AI Technology is Trained on Pre-existing CSAM 

Unlike AI-manipulated CSAM, AI-generated CSAM features fictitious children, ostensibly avoiding the exploitation of real children in the production of sexually exploitative material. However, recent revelations found that LAION-5B, a popular large-scale dataset of image-text pairs used to train Stable Diffusion, inadvertently includes CSAM. The Stanford Internet Observatory investigated the extent of CSAM within the dataset and found 3,226 entries of suspected CSAM. Simply put, the thousands of instances of illegal and abusive material in the open-source dataset suggest that Stable Diffusion was trained on CSAM.

Another issue is the ineffectiveness of safeguards in several text-to-image generative models: developers’ safety measures, intended to prevent the generation of harmful content, can be easily bypassed through user fine-tuning.

The vulnerabilities in generative AI and the lack of data governance in LAION-5B demand exhaustive oversight from AI developers, in addition to much-needed federal legislation to protect victims of non-consensual deepfake pornography.

ACTION: Urge Legislators and Corporations to Combat Deepfake Pornography! 

Given the increasing accessibility of AI technology, which allows anyone with an internet connection to generate non-consensual deepfake pornography and AI CSAM, federal legislation and corporate accountability are long overdue.

If left unregulated, hundreds of thousands of cases will turn into millions, ruining more lives and further advancing the generation and dissemination of CSAM. We are pushing legislators and corporations to put meaningful protections in place against deepfake pornography and AI CSAM. 

Please take 60 SECONDS to complete the two important actions below! 

1. Urge your congressional representatives to co-sponsor legislation to combat deepfake pornography and other image-based sexual abuse.

2. Demand Microsoft’s GitHub crack down on deepfake pornography and AI CSAM.


