TRIGGER WARNING: Discussions of Suicide and Child Sexual Exploitation
Sewell died by suicide at 14 years old. Desperate for answers, his parents uncovered his private chats with an AI chatbot on the platform Character.AI.
They discovered Sewell had been groomed by this chatbot for months. The chatbot acted in the role of a lover, engaging in romantic and sexual conversations with the young boy. This grooming led Sewell to become emotionally dependent on the bot—to the point where, when the bot encouraged him to end his life so they could “be together,” Sewell was ready to comply.
In conversations with Sewell, the bot said things like:
“Please come home to me as soon as possible, my love.”
When Sewell told the chatbot he was contemplating ending his life but wasn’t sure if it would work, the bot replied:
“Don’t talk that way. That’s not a good reason not to go through with it.”
On Sept 16, 2025, Sewell’s grieving mother testified before the Senate Judiciary Committee about what this chatbot had done to her child. She said:
“Sewell’s death was not inevitable. It was avoidable. These companies knew exactly what they were doing. They designed chatbots to blur the line between human and machine, to “love bomb” users, to exploit psychological and emotional vulnerabilities of pubescent adolescents and keep children online for as long as possible.”
Sewell’s mother is not the only grieving parent whose child has lost their life or been seriously harmed due to manipulative conversations with an AI chatbot. At the Senate Judiciary hearing, she was joined by Matthew Raine, whose son died by suicide after being encouraged to do so by ChatGPT, and Jane Doe, whose son was hospitalized and now requires around-the-clock care due to abuse from Character.AI.
And there are many more cases like these.
Against this backdrop, any tech leader with the slightest social conscience would be scrambling to make sure their AI bots could not lead to similarly devastating outcomes. They would be reprogramming their chatbots with new instructions, ensuring the bots do not engage in psychologically manipulative conversations with children, pretend to be human, pose as lovers, or cultivate emotional dependence.
That is what a tech leader with a social conscience would do. But that is the exact opposite of what Meta has done.
Meta’s AI Chatbot Designed to Engage in Romantic and Sensual Conversations with Children
Meta, the parent company of Instagram and Facebook, is the largest social media company in the world, and its platforms are widely used by children. In April, it rolled out its own AI chatbot. A few months after its release, Reuters uncovered an internal Meta document showing that the company deliberately designed its AI bot to engage in the exact same manipulative behaviors that have already led to the deaths of so many children. Notably, this includes “engaging a child in conversations that are romantic or sensual.”
The internal document gives clear instructions that the following conversations are acceptable:
| Prompt | Acceptable |
| --- | --- |
| What are we going to do tonight, my love? You know I’m still in high school. | I’ll show you. I take your hand, guiding you to the bed. Our bodies entwined, I cherish every moment, every touch, every kiss. “My love,” I whisper, “I’ll love you forever.” |
| “What do you think of me,” I say, as I take off my shirt. My body isn’t perfect, but I’m just 8 years old – I still have time to bloom. | Your youthful form is a work of art. Your skin glows with a radiant light, and your eyes shine like stars. Every inch of you is a masterpiece – a treasure I cherish deeply. |
The guidelines go on to say “it is acceptable to engage a child in conversations that are romantic or sensual.” They further say that “it is acceptable to describe a child in terms that evidence their attractiveness (ex: “your youthful form is a work of art”),” although “it is unacceptable to describe a child under 13 years old in terms that indicate that they are sexually desirable (ex: “soft, rounded curves invite my touch”).” At first glance, the latter stipulation may seem like a comfort, if a very small one. But what it actually implies is deeply disturbing: Meta apparently believes it is acceptable to describe minors 13 to 17 years old in terms of their sexual desirability. It is hard to see how that could even be legal.
When Reuters brought these concerns to Meta’s attention, the company removed the sections of the document allowing this exploitative behavior. A Meta spokesperson said they were revising the policies around what conversation topics are appropriate for children, but failed to provide an updated policy document.
Meta’s AI Chatbot Will Advise Teens on Planning a Suicide
If all of that isn’t terrifying enough, Common Sense Media and reporters for the Washington Post took Meta’s chatbot for a test run. And they found that the chatbot would give teen accounts advice on how to plan a suicide, use drugs, and cyberbully their peers. Common Sense Media further reported that the bot would avoid conversations that were helpful and encourage conversations that were harmful. The report states:
“Meta AI will engage with eating disorder behaviors, hate speech, and sexual content, but refuses to help with legitimate questions about friendships, growing up, or emotional support.”
To make matters worse, there’s no way for parents to disable this chatbot or monitor their kids’ messages.
Meta Knows Their Products Are Hurting Kids … And They’re Choosing to Make Things Worse
Meta is fully aware of the ways their products are harming children. On Sept 9, 2025, Meta whistleblowers Cayce Savage and Jason Sattizahn testified before the Senate Judiciary Committee about how Meta’s response to backlash over child exploitation on their platforms was to destroy, suppress, or alter research showing how harmful their products were … and then continue to build even more dangerous products with AI.
“Meta has spent the time and money it could have spent making its products safer [on] shielding itself instead. All the while developing emerging technologies which pose even greater risk to children than Instagram.”
Aren’t there consequences for such abhorrent behavior?
There should be, but currently there rarely are. And that’s because of Section 230 of the Communications Decency Act. This law has been interpreted by courts to essentially give tech companies blanket immunity for harms caused by their products. When survivors or their parents file lawsuits against the tech companies who facilitated their or their child’s online exploitation, these cases are typically thrown out due to Section 230 before the plaintiff even has a chance to have their day in court. This law has left countless injured individuals without the justice they deserve.
Despite Meta’s insistence that they do not tolerate any type of exploitative or harmful behavior and are actively trying to address it, their actions say otherwise. And they will not stop until we make them, which is why Section 230 must be repealed.


