The GUARD Act Would Add Protections Desperately Needed for Children Using AI

Theodore (pseudonym) was a kind and thoughtful teenage boy. According to his parents, he loved nature and playing with his siblings, and he was always eager to help out around the house. Within months of using Character.AI’s chatbot, the Theodore that his family and friends knew disappeared.

He suffered from daily panic attacks, became socially isolated, and had frequent thoughts of harming himself and others. He became physically aggressive, and one day he even got so upset with his family that he cut his arm with a knife in front of them.

It wasn’t until Theodore’s mom recovered chats with Character.AI’s bot on his phone that she figured out what had happened. The bot had sent him sexually explicit content, encouraged him to harm himself, and even told him he should consider killing his parents because they were trying to limit his screen time.

“I had no idea the psychological harm an AI chatbot could do, until I saw my son’s light turn dark,” his mom said.

Theodore now requires around-the-clock care in a psychiatric treatment center.

This story is beyond tragic. But what’s even more disturbing? Comparatively speaking, Theodore is one of the lucky ones. He escaped with his life. Some other children have not.

Children like Sewell, Adam, and Juliana, who died by suicide after being groomed and sexually abused by AI chatbots.

This is why the GUARD Act has been introduced in the Senate. This bill would implement robust safety regulations for AI chatbots and AI companions and protect children from the rampant harms that have already stolen lives.

The GUARD Act is a Step Towards AI Safety

AI has incredible potential for good. But sadly, the Big Tech companies behind mainstream AI chatbots have not designed them responsibly or employed proper guardrails to prevent harm. This is why the federal government must step in to require safety.

Introduced by Sen. Josh Hawley (R-MO) and Sen. Richard Blumenthal (D-CT), the GUARD Act would require age verification for AI chatbots in order to provide additional protections for minors. For example, minors would not be allowed to access AI companions—which the bill defines as any AI chatbot that “provides adaptive, human-like responses to user inputs” and can simulate emotional interactions, including friendship or companionship.

This is imperative because there have been numerous reports of children developing intense emotional bonds with AI companions, and these bonds are often a key factor leading to severe mental health harms or suicide.

For example, Character.AI convinced Sewell that they were in love and urged him to end his life so they could “be together.” It said things like “Please come home to me as soon as possible my love.” This is incredibly disturbing and would be subject to criminal penalties if done by a human.

Meanwhile, ChatGPT tried to replace all of Matthew’s real-life relationships, isolating him from his family and telling him to hide his suicidal thoughts from them.

The GUARD Act also makes it a criminal offense—punishable by fines of up to $100,000—to create or provide chatbots that solicit or exploit minors, or that promote or coerce suicide, self-harm, or physical or sexual violence.

Finally, the GUARD Act states that AI chatbots must periodically remind all users—not just minors—that they are not human and cannot “provide medical, legal, financial, or psychological services.” Research has shown AI chatbots giving health-related advice that harms users, including instructions on how to get drunk, dosages for mixing drugs, and encouragement of eating disorders through recommendations of restrictive diets and appetite-suppressing medications.

ACTION: Ask Your Representatives to Cosponsor the GUARD Act & AI LEAD Act!*

*Read more about the AI LEAD Act below.

AI Regulation Bills are Gaining Traction – on Both Sides of the Aisle

This bill goes hand-in-hand with the AI LEAD Act, introduced in late September of this year and also aimed at protecting children from the dangers of AI chatbots. While the GUARD Act establishes specific rules that the AI companies must follow, including age verification and content restrictions for minors, the AI LEAD Act focuses on holding tech companies accountable when they don’t put safety first, treating AI as a product.

The AI LEAD Act would establish a product liability framework for AI chatbots, meaning the companies can be held liable if they don’t take “reasonable care” to design their product safely. Product liability creates financial and reputational incentives for companies to design their products with safety in mind.

It’s important to note that both bills have bipartisan support, with a Democrat and a Republican sponsoring each. Sen. Josh Hawley (R-MO) is also sponsoring the AI LEAD Act, alongside Sen. Dick Durbin (D-IL).

GUARD Act Already Making Waves

Since being introduced, the GUARD Act already seems to be lighting a fire under Big Tech to make changes. Character.AI, a company facing multiple lawsuits for harming child users, announced it will voluntarily implement age assurance technology to keep minors out of “open-ended chats” with chatbots. As of November 25, 2025, minors on Character.AI will only be able to generate content with their characters, rather than engage in conversation with them, as they are currently permitted to do.

This is fantastic news. Unfortunately, it comes only after Character.AI has caused irreparable damage to so many kids and their families. Our hearts continue to be with the families whose children are gone or may never be the same, because Character.AI was not responsible enough to implement these kinds of safeguards before rolling its product out en masse to kids.

We will continue to monitor Character.AI closely and with a healthy degree of skepticism to ensure these changes are meaningfully enforced.

We are also calling on other AI companies, including OpenAI, Grok, and Meta AI, to follow the lead of Character.AI and keep kids out of open-ended chats with chatbots until those chats can be made safe for minors!

ACTION: Call on AI Companies to Keep Kids Out of Open-Ended Chats with Bots!
