“I’m sorry for everything you have all been through. No one should go through the things that your families have suffered, and this is why we invest so much … and we are going to continue doing industry-wide efforts to make sure no one has to go through the things your families have had to suffer.”
~Mark Zuckerberg to families who lost children to social media harms, January 2024
At a Senate hearing in January 2024, Meta CEO Mark Zuckerberg stood and faced families who had lost children to social media harms. He apologized and committed to investing in protecting children. Yet two years later, his actions tell a different story.
NCOSE’s annual Dirty Dozen List typically calls out corporations that are facilitating and profiting from sexual exploitation. But this year, we’re naming a person to the list: Mark Zuckerberg.
Why? Because Mark Zuckerberg’s own decisions and actions are at the heart of the rampant sexual exploitation proliferating across Meta platforms.
Meta is the tech giant that owns Instagram, Facebook, Messenger, and WhatsApp. The company also recently rolled out its Meta VR headset and MetaAI. All of these platforms are facilitating sexual exploitation en masse. And Zuckerberg has shown little interest in doing anything about it.
Here’s a snapshot of what users on Meta’s platforms experience:
Bark Technologies rated Instagram as the #2 app flagged for grooming, the #3 app flagged for risky contacts, and the #5 app flagged for severe sexual content. In 2024, Thorn listed Facebook as the #1 platform where minors reported an online sexual interaction, followed by Instagram and Facebook Messenger. Further, 19% of minors on social media experienced a sexual interaction on WhatsApp.
Read on for more evidence of how Mark Zuckerberg’s platforms have become breeding grounds for sexual exploitation.
The Dangers Posed by Meta’s Platforms
Meta’s platforms—Facebook, Instagram, Messenger, and WhatsApp—have become notorious for enabling child sexual abuse material (CSAM), grooming, sextortion, and sex trafficking.
Instagram, due to its high volume of teen users, has become a particularly popular spot for child predators, with internal audits revealing that its algorithms recommended 1.4 million potentially dangerous adults to teens in a single day.
Further, Instagram once displayed a pop-up warning users that search results had been flagged as potential child sexual abuse material, but included a button to “view results anyway.” Meta also previously had a “17-strike policy” before removing users who incurred violations for prostitution and sex trafficking.
Reckless design choices like these show Meta’s clear profit-driven approach, rather than prioritizing the safety of users.
Most recently, juries in two separate trials found Meta liable for the dangers its platforms pose to users.
In New Mexico, Attorney General Raúl Torrez filed a lawsuit against Meta for failing to implement adequate safety protocols to protect kids from sexual exploitation. His office’s investigation uncovered documents showing that 100,000 children are exploited across Meta’s platforms daily. The jury in this case found Meta liable for failing to stop child sexual exploitation and for misleading users about the safety of its platforms. Meta has been ordered to pay $375 million in damages.
Haley McNamara, Executive Director and Chief Strategy Officer at NCOSE, was deposed as an expert witness for this case. She spoke to the challenges parents face when trying to protect their kids on Meta’s platforms, and to Meta’s persistent failure to address safety issues despite NCOSE repeatedly bringing them to the company’s attention.
Meanwhile, in Los Angeles, Meta and YouTube were found liable for creating intentionally addictive social media platforms, even though they knew that excessive time on their platforms was negatively impacting users’ mental health. The plaintiff, K.G.M., known as Kaley, was the first of thousands of plaintiffs to go to trial against social media companies over mental health harms. Meta and YouTube were ordered to pay Kaley $6 million in damages as a result.
Only 1 in 5 of Instagram’s “Teen Account” Safety Tools Worked Effectively
In late 2024, Meta introduced “teen accounts” to Instagram. When they were announced, NCOSE and our allies rejoiced, as they seemed like a major safety change from the tech giant. Teen accounts were to default to privacy, restrict messages and exposure to sensitive content for users under 18, mute notifications overnight, send usage reminders, and offer parental supervision tools. However, when child safety experts and former Meta employees stress-tested these safety improvements, they found that they did not live up to expectations. The vast majority of Instagram’s teen account safety features did not work as stated or no longer existed.
Meta whistleblower Arturo Béjar, Fairplay, the Molly Rose Foundation, ParentsSOS, and Cybersecurity for Democracy at NYU and Northeastern University, with support from Heat Initiative, produced a report outlining their findings. Notably, only 17% of the 47 safety tools tested were fully working as intended. In fact, some of the most vital safety features, including controls on sensitive content, contact restrictions, and time-use management, were rated “ineffective” or “missing.”
So, even though Meta touted “teen accounts” as a massive safety improvement, they have failed to keep these safety features working properly, defeating their entire purpose.
The Problems with Meta’s AI Chatbot
Meta released a new AI chatbot in April of 2025. A few months after its release, Reuters uncovered internal documents showing that the company deliberately designed its AI chatbot to engage in “sensual or romantic conversations” with children.
The guidelines further stated that “it is acceptable to describe a child in terms that evidence their attractiveness (ex: ‘your youthful form is a work of art’),” although “it is unacceptable to describe a child under 13 years old in terms that indicates that they are sexually desirable (ex: ‘soft, rounded curves invite my touch’).” In other words, Meta’s guidelines essentially stated that it is acceptable to talk about 13-17-year-olds – who are still minors – in a sexual manner.
Beyond the inappropriate sexual conversations with minors, Common Sense Media and Washington Post reporters tested Meta’s chatbot and found it would give teen accounts guidance on planning suicide, using drugs, and cyberbullying peers. The report stated:
“Meta AI will engage with eating disorder behaviors, hate speech, and sexual content, but refuses to help with legitimate questions about friendships, growing up, or emotional support.”
And what’s worse? At the time, parents had no way to disable this chatbot or monitor their child’s interactions with it.
Whistleblowers Testify that Meta Directed Researchers to ERASE Data Collected in Relation to Child Safety
Former Meta staffers Cayce Savage and Jason Sattizahn testified before Congress last year about Meta’s practices when it comes to child safety. The two whistleblowers revealed that when internal research showed the platforms were negatively impacting users, Meta’s response was to suppress or erase this research rather than make safety changes.
“Meta has spent the time and money it could have spent making its products safer [on] shielding itself instead. All the while developing emerging technologies which pose even greater risk to children than Instagram,” Savage said in her testimony last year.
Meta conducted a survey known as the “Bad Experiences and Encounters Framework” (BEEF), which consisted of questions regarding negative user experiences on Instagram. Internal emails from Meta show that researchers were directed to delete data collected in response to the question, “How bad does [Instagram] make you feel?”
An internal message from a Meta researcher said:
“BEEF asks a question about emotional impact. But I was told I need to delete that data … For policy/legal reasons, I was told we need to delete the data and not analyze it. We’re not allowed to ask about emotions in surveys anymore.”
Meta Continues to Roll Out More Dangerous Technology Before Fixing Existing Concerns
As whistleblower Cayce Savage testified, Meta’s M.O. has been to roll out increasingly dangerous technology before addressing safety concerns in its existing products. One of the most recent developments is Meta’s smart glasses, which will reportedly employ facial recognition. According to reporting from the New York Times and other sources, when someone wearing the smart glasses looks at another person, the technology will identify that person and pull up their personal information, including name, home address, workplace, and more. Further, all of this data will be used to train AI.
This is an egregious privacy violation and poses serious risks of stalking, grooming, harassment, and more. It is incredibly ironic that Meta claims it cannot scan for child sexual abuse material for privacy reasons, yet is happy to build these terrifying features into its smart glasses.
This is just the latest in Meta’s long line of willfully reckless and increasingly dangerous product roll-outs.
Learn More & Take Action at DirtyDozenList.org!
Visit DirtyDozenList.org to learn more about Zuckerberg and the other mainstream entities that were named to this year’s Dirty Dozen List. For each of the 12 targets, we provide a quick, easy-to-use action where you can call for change.


