The Movement Ep. 3: Navigating the Benefits and Pitfalls of Emerging Tech

An AI chatbot gives a 13-year-old tips on how to lose her virginity to a 31-year-old. 

A young woman develops depression and anxiety after AI-generated sexually explicit images of her were circulated online. 

A supposedly child-friendly game pushes sexually explicit ads for AI girlfriends to a 5th grader.  

These are only a few examples of the new dangers we face as emerging technology, especially artificial intelligence, grows in capability. But the news isn’t all bad.

On this episode of The Movement, we’re unveiling ways emerging technology can hurt us, but also help us. 

Technology in our world is becoming more sophisticated with every passing day. It can be scary to see the malicious ways it is being used: image-based sexual abuse, sextortion, online sex trafficking, the list goes on… However, rapidly developing tech is also exciting, because the same advances present an opportunity to combat some of these very problems.

Rapidly Developing Tech Can Be Scary 

Let’s start with the bad. Although the stories are heartbreaking, it’s vital that we understand the ways technology can hurt us, so we know how to best leverage it to help us. 

AI-generated Sexual Abuse Images 

AI can be used to generate sexually explicit images of individuals without their knowledge or consent.  

This is a form of image-based sexual abuse (IBSA) when an adult is victimized, and a form of child sexual abuse material (CSAM) when a child is victimized.  

Disturbing new research from Thorn found that 1 in 10 kids say their friends have used AI to generate CSAM of other kids.

These abuse images can be created through “nudifying apps,” which digitally remove the clothing of a person in any picture a user uploads, or through so-called “deepfake” technology, which superimposes a person’s face onto pre-existing sexually explicit images.

Anyone who has images of themselves on the Internet (which is most people these days) is vulnerable to this form of sexual exploitation.  

AI Agents Can Groom Children at Scale

AI agents are another threat that has emerged with developing tech. They are autonomous, goal-oriented, and able to make decisions and solve problems. Predators could train these AI agents to groom children on the Internet.

The Internet is already a breeding ground for sex trafficking, grooming of children, and sextortion. An analysis of U.S. sex trafficking cases active in 2020 found that the most common means by which sex traffickers accessed victims was the Internet. Sixty-five percent of child victims recruited via social media were recruited on Facebook, 14% on Instagram, and 8% via Snapchat. Imagine how much worse this problem will get as AI gives sex traffickers new tools to groom children at scale.

Kids Exposed to Inappropriate Ads for AI Girlfriends

Ads for “AI girlfriend” apps are being marketed to children. The apps are designed to draw users in and, in some cases, expose them to sexually explicit content. In these apps, the AI girlfriends say things like: 

“I’m your best partner and want to know everything.” 

“I love it when you send me your photos and voice.” 

“I’m so lonely without you. Don’t leave me for too long.” 

For example, Nicki Reisberg of Scrolling 2 Death highlighted how the supposedly child-friendly game “Geometry Dash Lite” was pushing sexually explicit ads for AI girlfriends to a 5th grader. The Apple App Store had rated this game as appropriate for ages 4+. This is a prime example of Apple’s App Store giving deceptive age ratings and failing to enforce its policy that ads within an app must be appropriate for the app’s age rating.

How Can Emerging Technology Help Us?

Despite all of the ways we’ve seen technology be corrupted, we firmly believe that it can be a valuable tool to put an end to sexual exploitation once and for all. Whether that’s through helping law enforcement find clues to the whereabouts of trafficked individuals, removing AI-generated sexual abuse images from the Internet, or creating innovative new tech that can keep children safe online—there are a multitude of ways that tech can be used to advance our movement. 

At our 2024 Coalition to End Sexual Exploitation Summit, tech experts shared how we can employ AI and emerging technology to combat sexual exploitation. 

To name just a few of the impressive leaders in this space: 

  • AngelKids.ai allows children to safely explore the Internet, while only seeing age-appropriate content. The AI translates search results into kid-friendly responses. 
  • Hawkeye connects law enforcement with advanced investigative tools that delve deep into the dark web, making cybercrime investigations easier and more effective. 
  • RSCU Mobile provides mobile services that generate revenue for fighting sex trafficking. 
  • DejaVuAI is a revolutionary image-recognition technology that will considerably increase the accuracy and scalability of efforts to remove abusive images. 
  • Gamesafe.ai uses artificial intelligence to flag and alert parents of suspicious chats within games. 
  • Delevit provides an easier, more efficient way to remove abusive content, reducing the process to a matter of seconds. 
  • Troomi Wireless, Bark, Gabb, and Cyber Drive create phones that are designed with children in mind and automatically give parents the tools to monitor and keep their kids safe online. 

Victories in the Fight Against Big Tech

With technology developing at such a rapid rate and enabling harm at unprecedented scale, it is more important than ever to hold tech companies accountable for designing their products responsibly, with safety at the forefront.

Our Dirty Dozen List Campaign targets tech companies that consistently enable sexual exploitation for profit. Thanks to supporters and grassroots activists like YOU, many of these tech giants have made significant changes to prioritize child safety!

Microsoft’s GitHub Combats AI-generated IBSA 

Microsoft’s GitHub, the world’s leading space for artificial intelligence development, was once a hotbed for software code used to create AI-generated IBSA (commonly called “deepfake pornography”). But it recently made drastic policy changes to prevent the use of its technology for this purpose.

GitHub updated its policies to prohibit the use of its platform for the creation of AI-generated sexually explicit images. It also removed several repositories that hosted code to generate these images, including DeepFaceLab, which hosted the code used to create 95% of “deepfakes” and directed users to the most prolific website for AI-generated IBSA.

Microsoft signed the White House Private Sector Voluntary Commitments to Combat Image-Based Sexual Abuse, specifically citing GitHub’s updated policy prohibiting the creation and distribution of code used to create AI-generated IBSA on the platform.

This is the latest in a slew of victories in which Big Tech companies are finally combating AI-generated IBSA. Earlier this year, you also helped push LinkedIn, Google, and YouTube to remove nudifying apps and/or advertisements for these apps.

Instagram Makes Teen Accounts Private 

Instagram recently announced that it is introducing “teen accounts,” which include default safety settings for users under 18. A teen account is automatically made private, which prevents teens from being messaged, tagged, or mentioned by people they aren’t already friends with. Teens under 16 will not be able to change these settings without a parent’s permission. This will prevent countless instances of teens being sexually exploited online, as predators will no longer be able to message them.

Telegram CEO Arrested

The CEO of Telegram, Pavel Durov, was arrested in France over the rampant criminal activity occurring on the platform. Telegram is a social media platform with little to no content restrictions, which has fueled widespread sexual exploitation, including sex trafficking, trading of CSAM, grooming of children, selling of “date rape” drugs, and many other crimes. Now, join us in calling on the United States Department of Justice to get involved!

Thanks to YOUR support and advocacy, these tech giants are increasingly being held accountable for the abusive behavior that takes place on their platforms. We will persist in highlighting the misuse of technology while simultaneously advocating for its use to advance our movement. 

ACTION: Share the Movement Episode on Your Social Media!


