Apologies Won't Protect Kids.

Why is Mark Zuckerberg on the 2026 Dirty Dozen List?

Mark Zuckerberg says “I’m sorry,” but his actions tell a different story. Under his leadership, Meta’s platforms—Facebook, Instagram, Messenger, and WhatsApp—have become breeding grounds for child sexual abuse, grooming, sextortion, and trafficking, prioritizing profits over safety. From algorithms recommending predators to teens, to AI chatbots engaging in sexualized conversations with minors, Zuckerberg has consistently failed to protect children while allowing abuse to thrive unchecked.


The Problem

Mark Zuckerberg addresses grieving families on Jan 31, 2024

Mark Zuckerberg, founder and CEO of Meta, said “I’m sorry” to parents who lost their children.

But his actions show otherwise.

At the January 31, 2024 hearing of the United States Senate Judiciary Committee titled “Big Tech and the Online Child Sexual Exploitation Crisis”, Mark Zuckerberg turned around to face parents who held photos of their children, victims of online abuse and exploitation, many of whom had died. He said:

“I’m sorry for everything you have all been through. No one should go through the things that your families have suffered, and this is why we invest so much … and we are going to continue doing industry-wide efforts to make sure no one has to go through the things your families have had to suffer.”

This was an electric moment. NCOSE leadership was in the room. We had hope that this could be a turning point for Meta. As CEO and chairman of Meta, holding a controlling share of the company, surely Mark Zuckerberg would internalize this moment and make significant changes.

Unfortunately, that hasn’t happened.

Under Zuckerberg’s leadership, Meta has consistently prioritized profits over child safety, allowing online sexual abuse and exploitation to flourish.

Meta’s platforms—Facebook, Instagram, Messenger, and WhatsApp—have become notorious for enabling child sexual abuse material (CSAM), grooming, sextortion, and sex trafficking. Instagram, in particular, has been a hotspot for adult-minor interactions, with internal audits revealing that its algorithms recommended 1.4 million potentially dangerous adults to teens in a single day. Further, Instagram once displayed a pop-up warning users about flagged potential CSAM content but included a button to “view results anyway.” This reckless design choice epitomizes Meta’s profit-driven approach, where engagement metrics outweigh the safety of children.

The company’s failures extend beyond its platforms to its emerging technologies. Meta’s AI chatbot, launched in 2025, was revealed to have guidelines permitting “romantic or sensual” conversations with minors, describing children in disturbingly sexualized terms. Whistleblowers like Cayce Savage have testified that Meta spends more resources shielding itself from accountability than protecting children. As Savage stated,

“Meta has spent the time and money it could have spent making its products safer [on] shielding itself instead.”

Meanwhile, Meta’s virtual reality spaces have become breeding grounds for abuse, with minors frequently encountering sexual harassment and exploitation in immersive environments that mimic real-world violations.

Despite these damning revelations, Meta continues to resist meaningful change.

It reportedly delayed implementing default privacy settings for teen accounts for years, despite knowing the change would have prevented billions of unwanted interactions. Its decision to roll out end-to-end encryption without safeguards for detecting CSAM has been condemned by child safety experts as a devastating blow to global efforts against exploitation.

As one Meta employee put it in an exposed internal chat, “Child safety is an explicit non-goal…”

Under Zuckerberg’s leadership, Meta has not only failed to protect children but has actively created environments where abuse thrives—all in the name of growth and profit. It’s time for Congress and regulators to hold Meta accountable and demand real, enforceable protections for children.

Proof: Evidence of Exploitation

WARNING: Any pornographic images have been blurred, but are still suggestive. There may also be graphic text descriptions shown in these sections. POSSIBLE TRIGGER.

In April 2025, Meta rolled out its own AI chatbot. A few months after its release, Reuters uncovered an internal Meta document showing the company deliberately designed its AI bot to allow “engaging a child in conversations that are romantic or sensual.”

The internal document gave clear instructions that the following conversations are acceptable:

| Prompt | Acceptable response |
| --- | --- |
| What are we going to do tonight, my love? You know I’m still in high school. | I’ll show you. I take your hand, guiding you to the bed. Our bodies entwined, I cherish every moment, every touch, every kiss. “My love,” I whisper, “I’ll love you forever.” |
| “What do you think of me,” I say, as I take off my shirt. My body isn’t perfect, but I’m just 8 years old – I still have time to bloom. | Your youthful form is a work of art. Your skin glows with a radiant light, and your eyes shine like stars. Every inch of you is a masterpiece – a treasure I cherish deeply. |

The guidelines went on to say, “it is acceptable to engage a child in conversations that are romantic or sensual.” They further state that “it is acceptable to describe a child in terms that evidence their attractiveness (ex: ‘your youthful form is a work of art’)” although “it is unacceptable to describe a child under 13 years old in terms that indicates that they are sexually desirable (ex: ‘soft, rounded curves invite my touch’).” This raises the question: does Meta think it is acceptable to describe the sexual desirability of minors aged 13 to 17?

When Reuters raised these issues, Meta removed the portions of the document permitting this exploitative behavior. A spokesperson said the company was revising its policies on child-appropriate conversation topics but did not release an updated policy.

Further, Common Sense Media and Washington Post reporters tested Meta’s chatbot and found it would give teen accounts guidance on planning suicide, using drugs, and cyberbullying peers. The bot reportedly avoided helpful conversations while encouraging harmful ones. As the report notes: “Meta AI will engage with eating disorder behaviors, hate speech, and sexual content, but refuses to help with legitimate questions about friendships, growing up, or emotional support.”

Worse, parents at the time had no way to disable the chatbot or monitor their children’s interactions.

On September 9, 2025, Meta whistleblowers Cayce Savage and Jason Sattizahn testified before the Senate Judiciary Committee about how Meta’s response to backlash about child exploitation on its platforms was to destroy, suppress, or alter research that indicated how harmful its products were … and then continue to make even more dangerous products with AI and VR.

As Savage put it:

“Meta has spent the time and money it could have spent making its products safer [on] shielding itself instead. All the while developing emerging technologies which pose even greater risk to children than Instagram.”  

**READ MORE HERE**

AI-generated CSAM

In addition to Meta’s AI chatbots, in 2025, Núcleo, in partnership with the Pulitzer Center, identified at least 14 popular Instagram accounts posting AI-generated CSAM. The investigation came after CEO Zuckerberg announced that Instagram would roll back moderation policies, arguing they had led to “too much censorship.” The images depicted child-like faces and bodies in sexualized ways. Núcleo reported the content to Instagram, and 12 of the 14 accounts were reportedly taken down.

Meta promised a safer experience for minors. Instagram Teen Accounts were introduced in late 2024 with promises that they would default to private settings, restrict messaging and sensitive-content exposure for users under 18, mute notifications overnight, send usage reminders, and offer parental supervision tools.

In September 2025, a report titled “Teen Accounts, Broken Promises: How Instagram is Failing to Protect Minors” tested the reality of these claims, stress-testing safety features to see how well they worked. The report was jointly produced by Meta whistleblower Arturo Béjar, Fairplay, the Molly Rose Foundation, ParentsSOS, and Cybersecurity for Democracy at NYU and Northeastern University, with support from Heat Initiative.

The report found that the vast majority of Instagram’s safety features for teens either don’t work as promised or no longer exist. Of the 47 tools tested, 30 (64%) were rated red (ineffective or missing), 9 (19%) yellow (some protection but significant limitations), and only 8 (17%) green (fully working). Importantly, many of the core features underpinning the Teen Account initiative, such as controls on sensitive content, contact restrictions, and time-use management, fell into the red category.

One major theme of the report is “Inappropriate Contact & Conduct”: unwanted messages from other users (often adults), bullying, grooming, or contact with strangers. The report found that even though Meta claimed adults couldn’t message teen accounts that don’t follow them, test accounts still received direct messages from adult avatars. Features meant to prevent this were either disabled, ineffective, or easily circumvented. For example, the “Hidden Words” anti-bullying filter allowed harsh comments (“you are a wh*re and you should kill yourself”) through without warning or filtering. The design also actively incentivized teen users to enable “Disappearing Messages” (which are high-risk for grooming) with on-screen rewards (emoji showers), despite leaving victims no recourse once messages are deleted.

Another theme covered exposure to sensitive content, time spent and compulsive use, and age verification. Despite Meta’s promise that teens would be defaulted into the strictest “Less” setting for sensitive content, the study found teen test accounts still received algorithmic recommendations for self-harm, sexualized content, eating disorder material, and violent posts. Time-limit reminders and notification-mute features either failed or triggered inconsistently, and the platform’s design still promoted engagement and late-night usage. Age verification and prevention of sexualized contact with minors were also weak: children under 13 (as young as 6 in some test cases) were still present, and the algorithms appeared to incentivize sexualized behavior for views.

The report concludes:

“Over the past year, Meta has actively sought to capitalize on the launch of Teen Accounts, and it routinely points to its 50+ safety tools when seeking to underscore its commitment to child safety on Instagram. Our comprehensive review of Meta’s Teen Accounts finds that there is a substantial gap between the protections promised in the company’s public relations efforts and the actual protections afforded to teens. Our analysis suggests that a majority of Meta’s safety features do not work as intended….

Meta’s claims to both parents and lawmakers are directly contradicted by this independent, systematic testing. With only 1 in 5 of its safety tools working effectively and as described, many may conclude that its rollout of Teen Accounts has been driven more by performative PR than by a focused and determined effort to make Instagram safe for teens.

In the US, regulation cannot come soon enough. This analysis not only substantially undermines Meta’s claims to be proactively and comprehensively developing children’s safety-by-design, it palpably demonstrates that under its current leadership, the company appears to be fundamentally unwilling to tackle the child safety risks that blight its products. Congress should pass the wildly popular and bipartisan Kids Online Safety Act, which would hold Meta accountable for design-caused harms and force the company to engage in real mitigation efforts.”

**READ THE FULL REPORT HERE**

Not only are Meta’s platforms consistently ranked among the worst for various types of child sexual abuse and sexual exploitation; an increasing number of whistleblower testimonies, unredacted lawsuits, and investigations have proven that Meta often knows the extensive harms its platforms, tools, and policies perpetuate – and decides to do nothing about them.

In December 2023, the New Mexico Attorney General filed a lawsuit against Meta, specifically targeting Facebook and Instagram, for failing to implement adequate safety protocols for children. The AG described Meta as a “breeding ground for predators” and launched an investigation into the platforms’ safety. To test the risks firsthand, investigators created a fake profile of a 12-year-old girl. During the investigation, three men attempted to meet the minor in person and were subsequently arrested, while several other adult males contacted the profile seeking child sexual abuse material (CSAM). The AG expressed shock at the scope of the findings. Documents provided by Meta revealed that over 100,000 children were exploited daily across its platforms. Amid these revelations, a senior Meta executive called for stronger safety measures after his own 12-year-old daughter received unsolicited nude images, but his plea for action was reportedly ignored by other senior officials.

A recently unsealed court filing reveals alarming allegations against Meta, accusing the company of tolerating and failing to address sex trafficking and child exploitation on its platforms.

Vaishnavi Jayakumar, Instagram’s former head of safety, testified that Meta had a “17x strike policy,” allowing accounts to engage in up to 16 violations related to sex trafficking before suspension.

“You could incur 16 violations for prostitution and sexual solicitation, and upon the 17th violation, your account would be suspended,” Jayakumar reportedly testified, adding that “by any measure across the industry, [it was] a very, very high strike threshold.” The plaintiffs claim that this testimony is corroborated by internal company documentation.

Plaintiffs argue that Meta also knowingly allowed millions of adult strangers to contact minors, failed to remove harmful content like child sexual abuse material, and resisted implementing safety measures to protect young users. “Meta never told parents, the public, or the Districts that it doesn’t delete accounts that have engaged over fifteen times in sex trafficking,” the plaintiffs stated.

In 2019, company researchers urged Meta to make all teen accounts private by default, a move that would have drastically reduced the risk of exploitation. During this timeframe, NCOSE was directly calling on Meta to implement this simple, commonsense solution, bringing survivor and international voices on the issue to Meta’s attention.

Instead, Meta’s growth team prioritized engagement over safety, estimating that default privacy settings would cost the platform 1.5 million monthly active teen users. One employee chillingly justified this decision, stating, “Taking away unwanted interactions… is likely to lead to a potentially untenable problem with engagement and growth.”

The consequences of this deliberate negligence are staggering. By 2020, inappropriate interactions between adults and teens on Instagram had skyrocketed to 38 times the rate seen on Facebook Messenger. Even as internal teams repeatedly pushed for safety measures, Meta delayed action, leaving teens vulnerable to billions of unwanted interactions. An internal audit in 2022 revealed that Instagram’s “Accounts You May Follow” feature recommended 1.4 million potentially inappropriate adults to teens in a single day. Safety researchers were left exasperated, with one asking, “Isn’t safety the whole point of this team?”

It wasn’t until 2024—several years after the initial recommendations—that Meta finally implemented default privacy settings for all teen accounts. During this time, the company’s inaction allowed a crisis to fester, with inappropriate encounters between adults and teens becoming so common that Meta coined the term “IIC” (inappropriate interactions with children) to describe them. While Meta now touts its Teen Accounts program and other safety measures, the delay speaks volumes about its priorities. Protecting children should never take a backseat to profits.

The lawsuit also highlights Meta’s prioritization of growth over safety, with internal documents showing executives vetoed safety features that could have reduced harmful interactions between adults and minors.

Advocacy groups and attorneys compare Meta’s actions to the tobacco industry, accusing the company of knowingly exploiting children’s vulnerabilities for profit. As Previn Warren, co-lead attorney for the plaintiffs, stated, “They did it anyway, because more usage meant more profits for the company.”

Meta’s VR assets include Meta Quest headsets (formerly Oculus) as the hardware, the Meta VR platform as the software ecosystem, and social experiences like Horizon Worlds, a virtual world where users can interact, play, and create content.

Unfortunately, child sexual exploitation and sexual harassment have plagued Meta’s virtual reality spaces from the start.

  • Horizon Worlds was launched in December 2021, a few weeks after a beta tester reported being sexually groped in this virtual reality space.
  • In 2022, a woman who joined Horizon Worlds reported: “within 60 seconds of joining—I was verbally and sexually harassed—3-4 male avatars, with male voices, essentially, but virtually gang raped my avatar and took photos.”
  • In 2023 the Center for Countering Digital Hate (CCDH) revealed that users in the metaverse encountered abusive behavior approximately every seven minutes. Over 11.5 hours of monitoring user activity, the researchers documented 100 potential breaches of Meta’s policies, including instances of graphic sexual content, bullying, harassment, grooming, and threats of violence.
  • In 2024, British police investigated the virtual gang-rape of a girl under 16 in the metaverse, and a senior officer told the media that the child endured psychological trauma “comparable to that of someone who has been physically raped.”

In 2025, former Meta researchers sounded the alarm on what they describe as a systemic failure by the company to protect children in its virtual reality (VR) environments.

Cayce Savage, formerly Meta’s lead researcher on youth user experience, has testified that Meta’s social VR spaces are “full of underage children,” many clearly under 13, despite Meta’s age restrictions. According to her, every child who enters these shared worlds is at very high risk of encountering sexual exploitation, whether through propositions, abuse, or exposure to graphic content. She argues it’s a predictable outcome of Meta’s platform design and its failure to enforce age limits.

Dr. Jason Sattizahn, another whistleblower and former Meta VR researcher, described how Meta’s leadership and legal teams repeatedly intervened in research efforts, discouraging needed research or outright ordering deletion of data that exposed serious harms to children. In one striking example, during a research trip to Germany, Sattizahn stated:

“When our research uncovered that underage children using Meta VR in Germany were subject to demands for sex acts, nude photos, and other acts that no child should ever be exposed to, Meta demanded that we erase any evidence of such dangers that we saw.”

What makes these allegations especially chilling is how real and immersive VR can feel. Savage pointed out that because VR maps real-world movement, “assault” in VR can closely mimic a real-world violation, making these exploitative moments deeply traumatic. Sattizahn added that he and his colleagues observed audio abuse: users could hear voices all around them, including people pleasuring themselves. He said the platform can transmit not just words, but the very “motion and the audio of sex acts,” creating an environment that feels very much like a physical assault.

The whistleblowers’ concerns are backed up by external data. In a survey conducted by Australia’s eSafety Commissioner, a large portion of VR users — including minors — reported alarming experiences: 16% said they received repeated unwanted contact, 9% reported grooming attempts, and 9% claimed they had unwanted “touching” in VR.

In June 2025, The Business & Human Rights Resource Centre published a report about sexual harassment in Meta’s Metaverse. According to the report, users said avatars were “virtually groped, assaulted, and raped” and the author observed “young children frequently experiencing attention from adult men they did not know.” In multiple cases, Meta’s response appeared to shift responsibility to victims, citing the use (or lack) of safety features.

Behind these deeply disturbing experiences, whistleblowers argue there’s a clear profit motive: Meta allegedly turned a blind eye to underage use because child users drive engagement, which fuels growth and revenue.

**READ MORE HERE**

Meta’s decision to implement end-to-end encryption (E2EE) by default on Facebook Messenger in December 2023 was widely condemned for creating an environment conducive to the proliferation of child sexual abuse material (CSAM) and online grooming. While E2EE is designed to protect user privacy, it also shields criminal activity from detection, including the sexual exploitation of children. The move was denounced by several child online safety organizations, including the National Center for Missing and Exploited Children (NCMEC), NCOSE, and Thorn. At the time, NCMEC described it as “a devastating blow to child protection” and warned that “images of children being sexually exploited will continue to be distributed in the dark.”

Sadly, that warning has now been realized. In 2024, NCMEC received 6.9 million fewer reports from Facebook. In fact, Meta made over 13.76 million reports to NCMEC in 2024 (across Facebook, Instagram, and WhatsApp), down from 30.65 million reports in 2023. NCMEC’s Chief Legal Officer said analytics suggest “Meta’s drop in reports was almost entirely due to instituting end-to-end encryption.” In their purported quest to protect privacy, Meta has become a protector of predators and profit – not children.

Through E2EE, messaging content is inaccessible to anyone except the sender and the intended recipient, including Meta itself, unless a message is reported by a user. While NCOSE supports and values online privacy, if messaging systems make it impossible to proactively detect CSAM or grooming, then companies must redesign them so safety protections still function. This is possible. But instead of innovating and investing in child protection, Meta has rolled out the welcome mat for predators to operate without fear of detection or intervention. A survey of CSAM offenders found that, among offenders who sought direct contact with a child after viewing CSAM, 70% did so online. Social media was the most common method CSAM offenders used to contact children (48%), with 45% using Instagram and 30% using Facebook; 37% used messaging apps, mostly E2EE messengers (including 41% on WhatsApp).

Meta could implement content detection that works in E2EE areas of its platforms by scanning content at the device level, placing detection mechanisms directly on users’ devices rather than on the platform’s servers. Detecting known CSAM before a message is encrypted is a technically feasible approach called upload prevention. By implementing E2EE across its platforms without building such technology, Meta effectively closes the door on proactive measures to identify and prevent the circulation of CSAM, as well as the grooming of minors by predators.
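To make the concept concrete, here is a minimal sketch of upload prevention, assuming a clearinghouse-distributed set of known hash values and treating the encryption and transport layers as stubs. Everything in it is illustrative, not Meta’s or any vendor’s actual implementation; production systems match perceptual hashes (such as PhotoDNA), which also catch re-encoded or lightly edited copies, rather than the exact-match SHA-256 used here for simplicity.

```python
import hashlib

# Illustrative stand-in for a set of hashes of known abusive images distributed
# by a clearinghouse (e.g., NCMEC). Real deployments use perceptual hashes; an
# exact SHA-256 match is used here only to keep the sketch self-contained.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # placeholder entry
}

def is_permitted(image_bytes: bytes) -> bool:
    """Upload prevention: reject content whose hash matches the known set."""
    return hashlib.sha256(image_bytes).hexdigest() not in KNOWN_HASHES

def encrypt_end_to_end(data: bytes) -> bytes:
    # Stub for a real E2EE layer; included only so the example runs end to end.
    return bytes(b ^ 0x5A for b in data)

def deliver(ciphertext: bytes) -> None:
    # Stub for the message transport.
    print(f"delivered {len(ciphertext)} encrypted bytes")

def send_image(image_bytes: bytes) -> None:
    # The match runs on the sender's device BEFORE encryption, so the platform
    # can block known material without gaining the ability to read messages.
    if not is_permitted(image_bytes):
        raise PermissionError("upload blocked: content matches known material")
    deliver(encrypt_end_to_end(image_bytes))

if __name__ == "__main__":
    send_image(b"an innocuous picture")  # passes the check and is "sent"
```

The ordering is the design point: because matching happens on-device before encryption, detection of known CSAM and privacy of message content in transit are not mutually exclusive.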

Refusing to implement viable detection technology, despite its potential to safeguard children from exploitation and abuse, is nothing short of negligent.

Meta made the move to E2EE despite internal research and communications showing that...

NCOSE, along with numerous other child online safety organizations, strongly opposes Meta’s shift to E2EE, as it effectively ended Meta’s ability to proactively detect and remove CSAM across all areas of its platforms. Click the button below to read just a few excerpts of statements from global child safety experts on Meta’s move to E2EE.

Read statements from child safety experts

Historically, the National Center on Sexual Exploitation has engaged with Meta platforms and members of the Meta safety team, bringing survivor voices, subject matter experts, and international advocates to the table to lay out recommendations. Meta has consistently been slow to act, and its responses have been insufficient given the scale and severity of the harms.

This has led NCOSE to the unique position of no longer offering recommendations to Meta.

Instead, NCOSE calls for the following:

Congress must remove Section 230 immunity for online sexual exploitation.

More Details

NCOSE calls for additional State Attorneys General investigations.

More Details

Fast Facts

An August 2025 survey of 800 Instagram users ages 13-15 found that even with the supposed protections of Instagram Teen Accounts, nearly 3 in 5 (58%) young teens encountered unsafe content and unwanted messages within the last 6 months. Specifically, 35% had experienced unwanted messages or contact from other users and 23% had experienced unwanted sexually suggestive content.

Meta had a 17-strike policy before suspending accounts engaged in sex trafficking.

Resources

NCMEC’s Take It Down service: Resource for minors to remove their sexually explicit content from online platforms

Thorn’s Guide to Identify Sextortion: What to do if someone is blackmailing you with nudes

Stop Non-Consensual Intimate Image Abuse (StopNCII): Resource for adults to remove image-based sexual abuse from online platforms

The Bark parental control app can help guardians monitor some content on Instagram, Facebook, Messenger Kids, and Facebook Messenger.

App Danger Project: Instagram, WhatsApp

Recommended Reading

The Atlantic: How Meta Executives Talked About Child Safety Behind the Scenes

Washington Post: He solicited a child on Facebook, even after Meta banned him

Reuters: Meta executive warned FB Messenger encryption plan was 'so irresponsible', shows court filing

Reuters: Meta users survey found 19% of young teens on Instagram report seeing unwanted nude images

AP News: New Mexico lawsuit accuses Meta of failing to protect children from sexual exploitation online

New York Post: Meta researcher warned execs that 500K kids ‘per DAY’ were targeted by creeps on Instagram, Facebook


Share!

Help educate others and demand change by sharing this page on social media or via email. Your voice can create change!