Imagine the horror of discovering that child sexual abuse material (a.k.a. “child pornography”) of your precious little one had been posted on an online platform.
Imagine contacting the tech company, begging them to remove the material from their platform. Imagine your horror when the company fails to do so. Your despair when, even after some of the images are removed, they are simply reuploaded later.
Finally: Imagine your shock, outrage, and utter hopelessness when the courts find this tech company immune from all responsibility for what happened.
For numerous parents represented in the lawsuit Does 1-6 v. Reddit, this is no mere imagination exercise. It is their reality.
Six families were denied justice against Reddit in this lawsuit. And the case echoes multiple others where survivors, children, and parents have fought in the courts against the tech companies who facilitated exploitation, only to be denied even their day in court.
What is standing in the way of justice for these exploited individuals? Section 230 of the Communications Decency Act.
Rampant Harms Caused by CDA 230 Prompt “Sunset” Bill
Section 230 of the Communications Decency Act, or “CDA 230” for short, has been utilized by corporations to avoid being held liable for the harms they facilitate. While not initially intended to have such broad application, CDA 230 has been interpreted by courts as giving tech companies near-blanket immunity for any harms caused by their platforms.
CDA 230 has become the greatest enabler of sexual exploitation of the century—and Congress is finally recognizing it! That is why a bipartisan group of lawmakers has introduced the “Sunset 230” bill. If passed, the bill would render CDA 230 inactive after December 31st, 2025. In the interim, tech companies would need to work with Congress to develop a better legislative solution, or else simply lose CDA 230 protections entirely.
Deceptive Backlash Against Sunsetting CDA 230
Since the introduction of the Sunset 230 bill, there has been predictable backlash from tech lobbyists and CDA 230 die-hards in the media. Two notable examples are:
- A Teen Vogue article by the Activism Director of the Electronic Frontier Foundation (EFF), a tech lobby group that consistently leads the opposition to child protection bills.
- A Wall Street Journal article by the authors of CDA 230, former Representative Christopher Cox and Senator Ron Wyden.
These articles claim that sunsetting 230 would harm children, kill the Internet, and cause a number of other ludicrous outcomes. We address these claims below.
Myth: Sunsetting CDA 230 Will Harm Children
The Teen Vogue article argues that sunsetting CDA 230 would harm children by taking away their online spaces:
“Again and again, young people say online spaces are their ‘lifelines.’ Section 230 is the law that powers and protects those lifelines. Why would legislators want to change that?”
This is smoke and mirrors.
Sunsetting CDA 230 will not take away kids’ online spaces. It will simply make those spaces safer for children. It is true that the Internet has some benefits to children—we have never pretended otherwise. But it is equally true that the Internet threatens children with devastating harms. The goal of sunsetting 230 is to maximize the benefits of the Internet while minimizing the harms.
Teen Vogue, meanwhile, paints an extremely one-sided picture which only discusses the benefits of the Internet to children, without acknowledging the harms—harms that would never have reached the epidemic proportions of today without CDA 230 in place. Ignoring the harms of CDA 230 means ignoring the children’s lives that have been destroyed or taken too soon.
Teen Vogue urges Congress to “listen to what young people are actually saying.” Alright. Let’s listen to what all young people are saying.
What about the young people who showed up at a Congressional hearing with Big Tech CEOs, begging Congress to pass laws that would hold these tech companies accountable for prioritizing their safety?
What about the children who showed up at this hearing only in the form of photographs held up by their grieving parents… because these children’s lives ended due to the harms they encountered online?
Teen Vogue callously ignores all of this.
Teen Vogue Relies on Dubious Research from Tech Lobbyists
The main source Teen Vogue uses to support its claims is an article about an EFF survey. Leaving aside EFF’s obvious bias as Big Tech’s biggest lobby group, there are a number of things that make the source suspect.
First, the EFF survey supposedly shows that kids think that the Kids Online Safety Act (KOSA) would harm them, but KOSA doesn’t touch CDA 230 at all, so this is an entirely different issue.
Second, the article doesn’t link to the actual survey—the links that appear to direct to the survey simply lead back to the article itself.
Third, the article doesn’t mention any statistics or percentages at all; it never shows that the majority of young people (or even a substantial percentage of them) oppose CDA 230 or the Kids Online Safety Act. Many youth organizations have, in fact, strongly advocated for the Kids Online Safety Act and greater tech accountability.
There are, by contrast, plenty of actual statistics showing that young people themselves say social media has had a negative impact on them.
For example, Instagram’s own research found that:
- more than 40% of Instagram teen users in the U.S. or U.K. who reported feeling “unattractive” said the feeling began on the app
- one in eight users under 16 had experienced unwanted sexual advances on Instagram within the past week alone
- one in five teens say that Instagram makes them feel worse about themselves
- among teens who reported suicidal thoughts, 13% of British users and 6% of American users traced the desire to kill themselves to Instagram
Thorn, a prominent child safety organization, has also done extensive research soliciting youth perspectives on social media. They found that:
- 84% of teens believe it is at least somewhat common for an adult to attempt to befriend and manipulate a minor online (i.e., online grooming). Among LGBTQ+ teens, that figure rises to 91%.
- 29% of minors reported having an online sexual interaction with someone they believed to be an adult
- 1 in 3 LGBTQ+ teens have experienced online bullying, and nearly 1 in 8 of all teens have experienced it
- 1 in 3 teens have been sent unsolicited sexually explicit imagery
- 1 in 4 minors have seen nonconsensually reshared self-generated child sexual abuse material
And so much more. It is clear that Teen Vogue and EFF are not representing the voices of young people as thoroughly as they claim.
Myth: Sunsetting CDA 230 Will “Kill the Internet”
In their Wall Street Journal piece, CDA 230 authors Cox and Wyden fearmonger about how sunsetting 230 will “kill the Internet.”
“Reverting to this pre-Section 230 status quo would dramatically alter, and imperil, the online world,” Cox and Wyden write. “Most platforms don’t charge users for access to their sites. In the brave new world of unlimited liability, will a website decide that carrying user-created content free of charge isn’t worth the risk? If so, the era of consumer freedom to both publish and view web content will come to a screeching halt.”
First, it is blatantly misleading to imply that, because tech companies host user-created content for “free,” that content isn’t worth the risk of liability. User-created content is highly profitable: it allows tech companies to sell ad space and user data. The business model of most popular tech platforms is built on user engagement. More content means more views and clicks, and more clicks mean more money. Engagement actually increases for controversial content, so tech companies have an especially strong incentive to keep such content on their platforms rather than restrict users or charge for access. It is therefore far from a foregone conclusion that the cost of potential liability will always exceed the cost, in lost profits, of broad, sweeping content removals or bans.
It is important to emphasize that removing the blanket immunity granted by CDA 230 does not automatically equal liability. It does not mean that tech companies will be held liable for any harmful content that slips through despite their best efforts to keep their platforms safe. It simply means that, like companies in any other industry, they can be sued in an attempt to establish liability when a reasonable cause of action exists, such as when negligence on the part of a company leads to an injury. Tech companies that truly perform their due diligence need not fear liability.
Requiring the tech industry to invest in safety precautions and factor liability risk into their business models and product design will not break the Internet, just as it has never broken any other industry. The Internet is the most powerful and influential realm on Earth; the tech industry is the most profitable industry in the history of the world, and it is only growing. It is a shocking bit of propaganda to suggest that it cannot withstand standard liability and deserves more protection than any other industry—not to mention more protection than the children and vulnerable users it is actively harming.
Myth: Sunsetting CDA 230 Will Disincentivize Tech Platforms from Moderating Harmful Content
Both the Teen Vogue article and the Cox & Wyden piece claim that CDA 230 is necessary to allow tech platforms to moderate harmful content. This could not be further from the truth.
It is true that the original intention of CDA 230 was to protect tech companies’ good faith efforts to moderate harmful content (you can read more about this here). However, since then, court misinterpretations of CDA 230 have created the opposite situation. The near-blanket immunity that tech companies enjoy provides them with no incentive to moderate harmful content.
On the flip side, allowing tech companies to be sued for creating dangerous products—like any other industry—will create real incentive for corporations to prioritize safety of their users by moderating harmful content. Implementing safety precautions is how they will mitigate liability risks. Would car manufacturers have developed and implemented seatbelts and airbags for every car if they could never be sued or regulated? Of course not. Liability makes products safer.
Section 230 is no longer fostering a healthy Internet ecosystem; it is being used in bad faith to avoid any consequences for extracting maximum profit. Sunsetting CDA 230 does not threaten content moderation, the existence of the Internet, or the wellbeing of young people. Rather, it prioritizes humanity over profit and will create a healthier Internet that is safer for all users.
Learn more about the harms of CDA 230 and why reform is necessary here.
ACTION: Call on Congress to Reform CDA 230!
Please take 30 SECONDS to complete the quick action below! Your voice does make a difference!
By: Lily Moric and John Tuason