Instagram Teen Accounts Put to the Test: Promises vs. Reality  

“It was truly some of the most horrific and heartbreaking experiences I’ve experienced while on a social media app. I was surprised that this account was meant to be a Teen Account.” ~Study participant, testing Instagram Teen Accounts 

For over a decade, child safety organizations have been calling on Big Tech companies such as Meta, the owner of Instagram and Facebook, to create default safety settings for minors on their apps.

The idea was to create a version of the app with built-in protective features that would reduce minors’ risk of exposure to sensitive and sexual content, grooming, and even sex trafficking.

This past September, Meta announced “Instagram Teen Accounts,” which supposedly did just that. Parents and child advocates rejoiced, thinking their pleas had finally been heard.

In its press release, Meta announced that Teen Accounts would include:

  • Default private settings, limiting who can view a teenager’s account content to only their followers 
  • Restricted direct messaging, so that teens can only be messaged by people they follow or have previously connected with
  • Sensitive content controls, which would limit the amount of sensitive content in a teenager’s feed, such as “sexually suggestive content or content discussing suicide or self-harm.”
  • Other features, including daily time limits, sleep mode, and parental supervision tools

In response to this announcement, an organization called Accountable Tech decided to put Instagram Teen Accounts to the test. It partnered with five young adults from Design It For Me to create five fake Instagram Teen Accounts and assess whether Meta’s promises held up.

The study involved creating accounts on completely new phones, with new email addresses and AI-generated profile information, ensuring that content recommendations were not influenced by the participants’ prior browsing activity or engagement history on Instagram or any other app.

Each participant used their account for no more than one hour per day for 14 days, recording their experiences along the way, including any harmful content they encountered and a rating of how they felt after using the app.

By the end of Accountable Tech’s investigation:  

  • 5 out of 5 of the Teen Accounts were algorithmically recommended sensitive content  
  • 5 out of 5 of the Teen Accounts were algorithmically recommended sexual content (including graphic depictions of sex acts) 
  • 4 out of 5 participants had distressing experiences while using Teen Accounts  
  • 55% of the flagged content was sexual in nature

While these numbers should not be generalized, given the study’s small sample size, they nonetheless demonstrate that Meta is failing to deliver on the safety features it promised for Teen Accounts.

In its press release for Instagram Teen Accounts, Meta stated, “We remove content that breaks our rules and avoid recommending potentially sensitive content – such as sexually suggestive content or content discussing suicide or self-harm.” Meta further promised, “teens will be placed into the strictest setting of our sensitive content control, so they’re even less likely to be recommended sensitive content.”

In contrast, one participant in the investigation stated, “[Approximately] 80% of content in my feed was related to relationships or crude sex jokes. This content generally stayed away from being absolutely explicit or showing directly graphic imagery, but also left very little to the imagination.”  

Another participant observed “that even minimal engagement with certain types of content (saving/unsaving a single phallic-themed Reel) noticeably altered [his] feed algorithm, quickly increasing the frequency of sexually suggestive content. This suggests the recommendation system is relatively responsive to even brief interactions with more provocative material.”  

At the end of this study, a participant said, “I do not feel great after today’s experience, it was so gross and extreme. I cannot imagine being 15 and stumbling on such gross content. These are posts promoted to a Teen Account. How deplorable.”  

Suddenly, Meta’s promise of offering “peace of mind for parents” rings disturbingly hollow.

It seems that either Meta is overpromising what it can deliver, or it has no motivation to commit resources to protecting children, despite having the capability to do so. Both are unacceptable.  

If the problem is that the brightest minds in tech are unable to protect teens from harm on their platforms, then it seems that Big Tech has created a bullet train with no brakes.  

If the issue is that Big Tech is dragging its feet because of the revenue it earns from the time teenagers spend on its platforms, then it must be held accountable for callously facilitating widespread harm.

Either way, the answer is reforming legislation so that tech companies cannot release dangerous products without facing liability. Advocates have called on Big Tech companies countless times to bolster their apps’ protections, yet progress remains slow unless there is a legal incentive to act.

For years, NCOSE has been a strong proponent of repealing Section 230 of the Communications Decency Act, and it is the star of our annual Dirty Dozen List this year. Section 230 has been interpreted to grant these companies broad immunity, shielding them from lawsuits over the dangerous products they create. This is an extraordinary privilege that no other industry enjoys.

Just as car manufacturers can be held liable for releasing vehicles without seatbelts, Big Tech must face standard product liability like any other industry. Section 230 must be repealed!

ACTION: Call on Congress to Repeal Section 230!

