For the past five weeks, Big Tech has been put under the microscope like never before. The world is watching as tech executives, including Meta CEO Mark Zuckerberg, Head of Instagram Adam Mosseri, and YouTube leadership, testify before a jury for the first time ever about the addictive nature of their social media platforms.
This case is the first of many bellwether cases regarding social media harms set to go to trial in Los Angeles. The plaintiff, K.G.M. (also known as Kaley), alleges that platforms like Instagram, YouTube, Snapchat, and TikTok are intentionally designed to hook users, specifically teens and children. Kaley began watching YouTube at age 8, joined Instagram at age 9, Musical.ly (now TikTok) at age 10, and Snapchat at age 11. Kaley says the addictive nature of these platforms caused her to experience anxiety, depression, and body dysmorphia. Snap and TikTok both settled with the plaintiff before the trial started.
Now, tech executives must take the stand and defend practices that have inflicted harm on users since the platforms’ inception. Is this finally the reckoning that victims of social media’s harms have been waiting for?
That’s for a jury to decide, but here’s what we know so far.
Meta Knew Its Beauty Filters Encouraged Eating Disorders Among Teen Girls
While on the stand, Mark Zuckerberg was presented with evidence that showed all 18 of Meta’s internal experts agreed that beauty filters, which let users digitally alter their appearance to mimic cosmetic procedures—such as facelifts and nose jobs—can cause young girls to experience body dysmorphia and eating disorders. Kaley alleges that this is exactly the impact those filters had on her.
Margaret Stewart, a senior Meta staffer and mother of two daughters, sent an email to Adam Mosseri and others, urging them to support a ban on these beauty filters, which Meta knew were primarily being used by teen girls and were causing serious harm.
While this ban was initially enacted, it was later reversed after a Meta executive stated that banning the filters would “limit our ability to be competitive in Asian markets.”
Mosseri responded in support of loosening the ban by allowing users to access the filters, but not actively recommending them. An internal memo presented to leadership laid out the two options:
Option 1: Continue the temporary ban. Pros: Mitigate well-being concerns, no PR/regulatory risk. Cons: Limits growth.
Option 2: Lift the ban but remove filters from recommendation surfaces. Pros: Lower impact to growth. Cons: Still notable well-being risk.
Mosseri chose option 2. Even other high-ranking employees at Meta warned against lifting the ban on these filters. Nick Clegg, President of Global Affairs, said it would be “a very unwise thing to do” and that they would “rightly be accused of putting growth over responsibility.”
Yet this is the exact decision Meta made. While testifying, Zuckerberg confirmed,
“What we allowed was letting people use those filters if they wanted but deciding not to recommend them to people.”
Data Showing Emotional Impacts of Social Media Was Deleted
Meta conducted a survey known as the “Bad Experiences and Encounters Framework (BEEF),” in which they questioned 269,000 Instagram users about their negative experiences. Internal emails show Meta researchers were instructed to delete data in response to the survey question, “How bad does [Instagram] make you feel?”
An internal message from a Meta researcher said:
“BEEF asks a question about emotional impact. But I was told I need to delete that data. We can’t analyze it … For policy/legal reasons, I was told we need to delete the data and not analyze it. We’re not allowed to ask about emotions in surveys anymore.”
Shockingly, Mosseri admitted he had never even read the BEEF survey in its entirety.
NCOSE Social Media Manager Caroline Callicutt joins @NTDNews to discuss the latest developments in the social media trial accusing Meta of deliberately designing Instagram features to harm and addict children. Watch the clip. ⬇️ https://t.co/GNa6rOxubm #ProtectKids…
— National Center on Sexual Exploitation (@NCOSE) February 20, 2026
Lack of Age Verification and Intentional Targeting of Young Children
On paper, the age requirement for Facebook and Instagram is 13 years old. But internal Meta documents tell a different story. Documents shown in court revealed that Instagram monitored the online behavior of children as young as 8. One internal Meta document from 2015 showed that an estimated 30% of all U.S. children ages 10 to 12 were using Instagram. Another showed the company had a goal of increasing time spent on Instagram by 10-year-olds.
When presented with this evidence, Zuckerberg responded: “I don’t remember the context of this email from more than 10 years ago.”
Further, despite Meta’s purported age-gating, lawyers for Kaley found an internal document from 2018 stating, “If we wanna win big with teens, we must bring them in as tweens.”
Again and again, Zuckerberg denied giving employees goals to increase the time users spend on the platform. However, an email from 2015 showed the Meta CEO stating that his goal for 2016 was to increase users’ time spent by 12%.
Adam Mosseri Denied the Addictive Nature of Social Media
When Adam Mosseri, Head of Instagram, took the stand, he repeatedly denied that a person can become “addicted” to social media, conceding only that there can be cases in which a person engages in “problematic use” of it.
He also likened it to being hooked on a TV show.
“I’m sure I’ve said that I’ve been addicted to a Netflix show when I binged it really late one night, but I don’t think it’s the same thing as clinical addiction.”
However, the plaintiff’s attorney, Mark Lanier, presented an interview Mosseri did with NBC News in 2020, in which he acknowledged that social media addiction is real:
“I don’t know that I have a good name for it. Maybe I probably should, now that you say that. But I think that problematic use, for that, whether or not you call it addiction, I think that’s probably reasonable to call it.”
Meta’s Platforms Are More Powerful Than Parental Controls
Project MYST (Meta Youth and Social Emotional Trends), for which Mosseri approved the funding, found that Meta’s purported parental controls are no match for the platform itself and, in fact, have no impact on attention or time spent on the platforms.
Further, their research found that teens who had experienced more traumatic life events were less capable of regulating their own screen time.
What did Mosseri have to say for this? “We do lots of research projects. I don’t, I apologize, remember this specific study.”
YouTube Executive Suggests 5-6 Hours of YouTube Per Day Is Good for Kids
According to reporting by Scrolling2Death’s Nicki Petrossi, who has been bringing real-time updates from inside the courtroom for the duration of this trial, YouTube executive Cristos Goodrow testified that his children watch 5 to 6 hours of YouTube per day. He insisted that YouTube has been very good for his kids.
Later on, he said that he is not aware of any children who watch YouTube without their parents. He was then asked whether this meant he watches 5 to 6 hours of YouTube with his kids each day. Incredibly, Goodrow replied yes.
However, internal documents from YouTube tell another story. According to Petrossi, Kaley’s attorney presented an internal document showing that the company intended for YouTube to function as a “babysitter,” so that parents could leave their child alone with the app to be entertained.
Further, Petrossi says numerous internal documents were brought forth showing that YouTube intentionally designed its platform to be addictive, despite Goodrow’s denials.
The Kids Online Safety Act is Vital to Mitigating Social Media Harms
If this trial has shown us anything, it is that Big Tech has consistently lied to the American people about its practices. While company policies and public relations say one thing, internal documents reveal the truth: Profits will be prioritized over safety every time.
Now, it’s time to compel them to change. Congress must pass the Kids Online Safety Act to ensure platforms are designed with safety in mind. Right now, these platforms are plagued by dangerous algorithms designed to hook users and promote content that is detrimental to mental health, especially for children.
Kaley’s trial is only the beginning. There are 1,600 other plaintiffs awaiting justice. The time to act is now!

