Fact or Fiction? The Truth About Section 230 of the Communications Decency Act 

Sex trafficked by a predator who contacted her on Instagram.

Coerced into sending sexually explicit images by pedophiles on Kik. 

Groomed by a convicted predator on Snapchat who eventually showed up at her home.

These are stories of minors who have been exploited at the hands of social media. And in all of these cases, the tech platforms involved faced no accountability for their role in actively facilitating this exploitation. That’s courtesy of Section 230 of the Communications Decency Act. 

Despite the active role that many social media platforms play in facilitating the exploitation of children, Section 230 grants them broad immunity from being held liable.  

With the growing discourse around Section 230, Big Tech is continuing to push fallacies in an attempt to prevent even the slightest reforms to this harmful law. Here are some of the myths they are pushing, and why these arguments are devoid of merit. 

Myth #1: Repealing Section 230 would infringe on freedom of speech.

“The First Amendment, not Section 230, is the basis for our nation’s free speech protections and those protections will remain in place regardless of what happens to Section 230,” said Congressman Frank Pallone at a hearing on Section 230 last May.  

This point cannot be emphasized enough. Free speech will not suddenly disappear if Section 230 is repealed—we still have the First Amendment.  

Further, it is worth pointing out that the First Amendment prohibits the government from censoring speech; it does not apply to private companies (such as social media companies). The text of the First Amendment reads, “Congress shall make no law … abridging the freedom of speech” (emphasis added).  

So, while the First Amendment is often at the center of the debate around content moderation on social media, the truth is it’s completely irrelevant. For better or for worse, social media platforms are already legally permitted to censor users. Neither the First Amendment nor Section 230 ever prohibited them from doing this.  

Using free speech as an argument to defend the spread of harmful content on social media is a red herring. It serves to distract the public from the fact that Big Tech could moderate illegal and exploitative content, but instead profits from leaving it up.  

Myth #2: Reforming or repealing Section 230 will inhibit technological innovation, effectively “breaking” the Internet.

Big Tech and other parties who oppose Section 230 reform often argue that repealing this law would set up an untenable situation where tech platforms would have to moderate content perfectly in real-time, or else be held liable for any harmful content that slipped through the cracks. They argue this would stifle innovation by deterring entrepreneurs from the tech industry due to constant liability fears. 

But let’s be clear: removing blanket immunity does not automatically equal liability. It simply means that, like any other industry, tech companies can be sued in an attempt to establish liability, if a reasonable cause of action exists. For example, if negligence or recklessness on the part of the company led to the occurrence of an injury, the company could potentially be sued. Tech companies who truly perform their due diligence need not fear liability. 

Requiring the tech industry to invest in safety precautions and to factor liability risk into its business models and product design will not break the internet, just as similar requirements have never broken any other industry. This is the most profitable industry on the planet, in the history of the world, and it is only growing. Big Tech must have a legal duty to ensure that its platforms, with their ever-growing integration into the most personal aspects of our lives, are safe for users, particularly the most vulnerable ones. Our lawmakers must require them to abide by a duty of care. 

It is a shocking bit of propaganda to suggest that the Internet, the most powerful and influential realm on the planet, cannot withstand liability and deserves more protections than any other industry. Not to mention, more protections than entire generations of children. 

Myth #3: Without Section 230, Tech will be inundated with frivolous lawsuits.

Many in the tech industry argue that without Section 230 in place, they will face an overwhelming number of frivolous lawsuits. But as Dani Pinter, Senior Vice President at NCOSE, said on an episode of the Ending Sexploitation Podcast, “It’s impossible to say [frivolous lawsuits] would overwhelm tech companies, which are the biggest, richest, and most powerful companies in the world.” 

Just like companies in any other industry have to deal with liability, certainly the world’s most profitable industry should have to as well.  

From a legal standpoint, we already have rules penalizing frivolous lawsuits. In federal courts, Federal Rule of Civil Procedure 11(b) prohibits attorneys from filing frivolous claims, and violations can lead to sanctions against the attorney and sometimes the client. States have similar rules in their codes of civil procedure. 

Repealing Section 230 would not cause a flood of frivolous lawsuits—it would simply allow legitimate lawsuits to proceed. Right now, such legitimate lawsuits are often halted at the very first step by Section 230 immunity. Survivors are not even allowed their day in court. 

No other industry enjoys the protections of Section 230, and yet, they are not crippled by frivolous lawsuits. 

Myth #4: Defending lawsuits is costly, so without Section 230, tech companies will likely remove any controversial speech to avoid legal battles. 

This assumption that all or even most platforms would err on the side of removing controversial content to avoid liability is simply not correct. The main reason that platforms currently do not remove harmful content is because it is profitable. The business model for most of these popular platforms is based on user engagement—and user engagement increases when the content is more controversial. 

For most companies, profitability is based on the volume of content, the volume of eyeballs on that content, and user engagement, which lets them sell ad space and user data to advertisers. This is why platforms loathe removing content: everything that gets removed decreases engagement, clicks, and views on the platform. Therefore, it is not a foregone conclusion that the cost of potential liability will always exceed the cost of broad, overly cautious removals. 

Platforms will need to invest more in moderation and be more cautious, which is a good thing, and actually creates more space for meaningful discussion, rather than rewarding the most extreme views and the most shocking content. 

Myth #5: Tech companies shouldn’t be held responsible for the bad actions of people who use their products or services. 

Just as kitchen knife manufacturers aren’t held responsible if someone uses a knife to stab somebody, removing Section 230 immunity does not automatically mean that courts will treat tech platforms any differently. It all comes down to the judicial interpretation of the law. Repealing Section 230 means that these cases can actually be litigated, rather than being dismissed in the earliest stages, before the plaintiffs even have a chance to show what the tech company has done wrong (i.e. before they get to the “discovery” stage).  

Often, tech companies are active, not passive, in their facilitation of harm. Algorithms that promote harmful content, that match children with adult strangers, that allow terrorists to find and associate with each other, are just some examples of tools that platforms know are facilitating the harmful conduct itself. When a platform knows it is facilitating and even profiting from extreme harm, and causing damage to its users, should it not be liable? 

In any case brought against an online platform, the facts will matter. What the platform knew and what the platform did will determine whether it is liable. Removing the protections of Section 230 means only that victims will have their day in court. It does not mean they will win.  

Myth #6: Repealing Section 230 will hurt startups.

Some believe that ending Section 230 will hurt startup tech companies because they do not have the same resources as large tech companies to invest in safety features and defend against lawsuits. 

However, the truth is Section 230 is actually hurting startup companies. This is because Big Tech has used it to argue immunity against federal antitrust claims, effectively allowing them to engage in anticompetitive conduct. While this was not the intention of Section 230, Big Tech abuses the law in this way, the same way it abuses Section 230 to get away with sexual exploitation. Thus, if Section 230 were repealed, competition would thrive more freely.  

The federal government also already regulates tech companies through the Digital Millennium Copyright Act. This Act requires websites to remove copyrighted material, and it applies to both small and large companies in the online sphere. If companies can effectively monitor and remove copyrighted information, then the technology exists to monitor for illegal and exploitative material (such as child sexual abuse material). 

Myth #7: Tech companies can’t possibly be expected to keep track of millions of users and trillions of posts. 

The goal in repealing Section 230 is not to force tech companies to have employees manually review every single post—it is to incentivize tech companies to design safer products and invest in safety tools. 
 
Consider an analogy to car manufacturers. We don’t expect car manufacturers to monitor their customers on the road and ensure they’re driving safely. But we do expect them to install seat belts, airbags, functioning brakes, and the like. If car manufacturers design products that fail to meet established safety standards, the company can be held liable. The same should be true of tech companies.  

Unfortunately, court misinterpretations of Section 230 have gone so far as to immunize dangerous and negligent product design. That is what has to change.  

Further, at the end of the day, the size of the company is not an excuse. If a massive restaurant chain like McDonald’s suddenly started having food poisoning issues and tried to blame it on the size of their chain, we wouldn’t accept that. We expect that, if they want to serve millions of people, they invest enough money to do so safely.  

Some may counter this point with the fact that a company like McDonald’s is making more money per customer interaction, so they can afford to hire more people to accommodate a large consumer base. Many say it’s not financially feasible to hire enough content moderators to account for the number of users on an internet platform.  

However, this logic is based on a misunderstanding of how content moderation actually works. Moderation involves automated tools as well as human review. Removing Section 230 immunity does not mean that a human being would need to review every single post uploaded. Automated tools would flag high-risk or user-reported posts, which moderators would then review with additional tools. 
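The division of labor described above—automated flagging feeding a much smaller human review queue—can be sketched in a few lines of illustrative Python. This is a hypothetical simplification, not any platform’s actual system; the field names and risk threshold are assumptions for the sake of the example:

```python
# Hypothetical moderation triage: an automated classifier scores each post,
# and only high-risk or user-reported posts are routed to human moderators.
# No one reviews every post; the threshold and fields are illustrative only.

def triage(posts, risk_threshold=0.8):
    """Split posts into a human-review queue and an auto-approved set."""
    needs_review, auto_approved = [], []
    for post in posts:
        if post["risk_score"] >= risk_threshold or post["user_reported"]:
            needs_review.append(post)
        else:
            auto_approved.append(post)
    return needs_review, auto_approved

posts = [
    {"id": 1, "risk_score": 0.95, "user_reported": False},  # flagged by the classifier
    {"id": 2, "risk_score": 0.10, "user_reported": True},   # flagged by a user report
    {"id": 3, "risk_score": 0.05, "user_reported": False},  # passes automatically
]
review_queue, approved = triage(posts)
print([p["id"] for p in review_queue])  # → [1, 2]
```

The point of the sketch is proportion: human moderators see only the small fraction of posts that automated tools or users have already flagged, which is what makes moderation at scale feasible.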

This kind of moderation is already happening to a degree on some platforms, so we know it’s possible and practical. But it is an investment and no platform is investing enough, if at all, because why would they? How can they justify the expense when the worst that can happen to their bottom line if they ignore harmful activity is bad press? They cannot be sued.  

Sadly, for most tech companies, the suffering of children is not enough motivation to voluntarily spend money on safety, when they can just shrug and apologize instead, and continue risk-free with their current practices.  
 
We must ACT NOW to ensure tech companies are held accountable for prioritizing safety!  

ACTION: Call on Congress to Repeal Section 230 of the Communications Decency Act!
