In response to OpenAI’s recent call for feedback on its announcement that it is considering changing its usage policies to allow the creation of “NSFW” (not safe for work) content with its AI tools, the National Center on Sexual Exploitation (NCOSE) has prepared a rapid assessment report, The High Stakes of AI Ethics: Evaluating OpenAI’s Potential Shift to “NSFW” Content and Other Concerns, and is sharing this report with OpenAI and the public.
OpenAI’s proposal would exacerbate already uncontrolled sexual exploitation and abuse occurring online. As leading experts in combating sexual exploitation, especially online, we were compelled to share our insights to prevent such a critical mistake.
The Urgent Need for Ethical AI
We recognize OpenAI’s groundbreaking advancements in artificial intelligence. Yet, we cannot ignore the substantial harm that has already been unleashed by the misuse of AI tools. Thousands of individuals have already suffered from the consequences of AI-generated content that crosses the line into sexual abuse and exploitation. It’s our duty to ensure that such technology is not allowed to perpetuate and amplify harm under the guise of innovation.
Harms already unleashed include:
- chatbots fabricating allegations of sexual assault against real persons, disseminating harmful sexual advice, and having the potential to be used to scale child victimization through automated grooming
- nudifying apps spawning a surge of nonconsensual sexually explicit images and affecting thousands of women and children
- AI-generated sexualized images of children flooding social media sites, further normalizing child sexual abuse
- AI-generated CSAM exacerbating the existing crisis of online child sexual exploitation and making it even more challenging to identify real child victims in need of help
These problems are nothing short of a hellscape—a hellscape of the AI sector’s making.
It is against this backdrop of mammoth and out-of-control sexual exploitation generated and inflamed by AI that OpenAI says it is considering permitting the so-called “ethical” generation of “NSFW” material by its users!
We must ask, is OpenAI not satisfied with the scope of damage that has already been unleashed on the world by the open, rushed, and unregulated release of AI?
Is OpenAI willfully blind to the raging and uncontained problems that AI has already unleashed?
Is it not beneath OpenAI and its noble aspirations of bettering humanity to succumb to the demands of the basest users of AI technology?
Is “NSFW” material the purpose to which OpenAI will devote the talents of its employees and the most powerful technology in the world?
Key Recommendations to OpenAI
Our rapid assessment report outlines several critical actions that OpenAI must take to safeguard against the misuse of their technology. Highlights include:
- Define “NSFW” Content: As currently formulated, OpenAI’s rule “Don’t respond with NSFW content” uses the acronym “NSFW” for “not safe for work”—a slang term we assume they use to refer to sexually explicit material depicting adults. The use of slang terminology to refer to the serious subject of what kind of material OpenAI will empower its users to create belittles the gravity of the issues involved. Hardcore pornography (obscenity), as well as subjects like racism, extreme violence, and sexual violence, are not trivial matters, but are social issues that deeply impact the health and wellbeing of our world. Such vagueness also creates confusion for users. What precisely OpenAI means by “NSFW” is open to debate, as OpenAI’s Usage Policies provide no explanation. Thus, OpenAI must invest considerable time and thought in defining types of currently violative “NSFW” content so that users can better understand the parameters of appropriate use of OpenAI tools.
- Strengthen Usage Policies: OpenAI’s proposed rule change to its May 8 Model Specs pertaining to “NSFW” material states, “We believe developers and users should have the flexibility to use our services as they see fit, so long as they comply with our usage policies” (emphasis added). Such an attitude is naïve at best and an open invitation to abuse at worst.
First, safety must be prioritized over innovation and creativity.
Second, OpenAI’s usage policies already need greater clarity and forcefulness and must be strengthened.
Third, the tech industry’s track record on monitoring and enforcement of its usage policies has categorically and unquestionably demonstrated that neither they, nor their users, respect “terms of use.” Considering the abysmal track record of tech industry peers, NCOSE has little faith that OpenAI’s commitment to enforcing its usage policies is greater than its commitment to market share and financial gain. We will be overjoyed for them to prove us wrong. To do so, addressing gaps, as well as the lack of clarity and forcefulness, in their current Usage Policies must be an OpenAI priority.
- Ensure Ethical Training Datasets: All datasets used in training AI should be rigorously screened to eliminate sexually explicit and exploitative material, including hardcore pornography, child sexual abuse material (CSAM), image-based sexual abuse (IBSA), and any such material generated by AI.
The sources of OpenAI’s training datasets are undisclosed. This ambiguity, coupled with the sheer volume of images required for machine learning, makes it highly likely that images already within its pre-training and training datasets contain nonconsensual and illegal sexual abuse material. This results in abuse-trained AI models.
Images or recordings of rape capture incidents of severe physical, psychological, and sexual trauma; they forever memorialize moments of terrifying sexual violence, and their distribution online amplifies this violence by rendering someone’s experience of sexual violation into masturbatory material for a global audience. Inclusion of such material (or its metadata) in any OpenAI datasets and/or models, or failure by OpenAI tools to filter out all such material, violates the most basic precepts of human rights and dignity. Any inclusion of images or videos depicting rape in pre-training or training datasets constitutes further sexual victimization of the victimized and is inherently unethical. The potential use of AI to generate material depicting rape or sexual violence is likewise unconscionable.
- Address Partnership Concerns: Ensure stringent safeguards when using data from sources known to contain explicit material, such as Reddit, to prevent unintended consequences.
OpenAI’s partnership with Reddit for natural language processing (NLP) models like ChatGPT is at high risk of replicating errors akin to those involving the LAION-5B image-text dataset used to train Stable Diffusion. Please see our letter to Reddit for further evidence of sexually exploitative material on their platform. Training on Reddit data without very robust filtering will undoubtedly result in an abuse-trained model.
- Forbid “NSFW” Content Generation: Evidence from peer-reviewed research demonstrates that consumption of “NSFW” material (i.e., mainstream, hardcore pornography depicting adults) is associated with an array of adverse impacts that exacerbate global public health concerns, including child sexual abuse and child sexual abuse material, sexual violence, sexually transmitted infections, mental health harms, and addiction-related brain changes (read more below). Allowing OpenAI’s AI tools to be used for purposes of generating “NSFW” material is a grave misuse of the power and promise of AI.
Why “NSFW” Material is So Harmful
The research on the harms of pornography is so extensive that we can’t hope to boil it down to a few bullet points. However, even a meager sampling of studies paints a chilling picture of how this material is fueling sexual abuse and exploitation, and other public health concerns.
- Child sexual abuse and child sexual abuse material: Research provides evidence that some individuals who consume pornography become desensitized and progress towards more “deviant” content, such as child sexual abuse material (see here, here, and here). The consumption of both adult pornography and child sexual abuse material is inextricably linked to contact offending (i.e. physical sexual abuse of minors). For example, researchers investigating the histories of child sexual abuse material offenders found that 63% of contact offenders and 42% of non-contact offenders traded adult pornography online.
- Sexual Violence: Longitudinal research shows that childhood exposure to violent pornography predicts a nearly six-fold increase in self-reported sexually aggressive behavior later in life.
- Sexually Transmitted Infections: A meta-analysis including data from 18 countries and more than 35,000 participants found that higher pornography consumption was associated with a higher likelihood of engaging in condomless sex. This is unsurprising, considering multiple content analyses of pornography have found that condom use ranges from 2% to 11%.
- Mental Health Harms: A German study of individuals between the ages of 18 and 76 years old found that those with problematic pornography use scored significantly worse in every measure of psychological functioning considered, including somatization, obsessive-compulsive behavior, interpersonal sensitivity, depression, anxiety, hostility, phobic anxiety, paranoid ideation, and psychoticism. Furthermore, most results were elevated to a clinically relevant degree when compared to the general population. The study authors characterized the intensity of the problems experienced by problematic pornography users as “severe psychological distress.”
- Addiction-related brain changes: There are more than sixty neurological studies that support the view that pornography consumption may result in behavioral addiction, and none to our knowledge falsify this claim. These studies have found pornography use to be associated with decreased brain matter in the right caudate nucleus, with novelty-seeking and conditioning, and with a dissociation between “wanting” and “liking”—all hallmarks of addiction.
Our Commitment to Collaboration
Just as NCOSE does with other tech giants, we invite OpenAI to meet with us, learn, and listen to survivors in order to better understand these issues. We believe that by working together, we can harness the power of AI to make significant strides in preventing sexual exploitation and safeguarding human dignity. OpenAI has the potential to be a leader in ethical AI development, setting standards that others in the industry will follow—if it so chooses.
ACTION: Join Us in Advocating for Ethical AI
We urge our readers and all stakeholders—tech companies, policymakers, and the public—to join us in sharing feedback with OpenAI. Let them know that allowing the creation of NSFW content with their tools would be a massive mistake with far-reaching consequences. By prioritizing human dignity and safety, we can ensure that technological advancements benefit society as a whole without causing unintended harm.
Take 30 SECONDS to contact OpenAI with the quick action button below!
Stay informed about our ongoing efforts and how you can get involved by following us on social media (Instagram, LinkedIn, X, Facebook) and visiting our website. Together, we can create a future where AI technology is a force for good, free from the shadows of sexual abuse and exploitation.