Over the past several years, a previously obscure section of a decades-old law has become a flashpoint in the debate over the role major technology companies play in American society, politics, and policy. Section 230 of the Communications Decency Act, passed by Congress in 1996, established legal principles that were important to the rapid growth of the internet:
- First, the law said an “interactive computer service” provider hosting multiple users (i.e., a social media platform like Facebook or Twitter) is not considered the “publisher or speaker of any information” provided by someone else on the service; in other words, Facebook or Twitter are not considered the “publisher” of a post or tweet shared by a user on their platform.
- Second, these interactive computer service providers can “voluntarily...restrict access to or availability of” certain material and not be held legally liable for such restrictions, provided the content is “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable” and such restrictions are made “in good faith”; in other words, Facebook or Twitter can remove a post they deem obscene and not be sued for doing so.
The former principle allows service providers to host dozens, thousands, or even millions of users on their platforms - and the billions of interactions, conversations, and comments that flow from those users - without fear of being held legally responsible for most content. (There are important exceptions; for example, a 2018 law established that service providers are liable for sex trafficking content.)
The latter principle gives platforms the ability to remove content that, while constitutionally protected, may be offensive or harmful to a large number of users. It effectively allows platforms to make their websites or apps safer for everyone involved.
Sen. Ron Wyden (D-OR), one half of a bipartisan pair of lawmakers who created Section 230, recently described these two principles as “a sword and a shield”:
Essentially, 230 says that users, not the website that hosts their content, are the ones responsible for what they post, whether on Facebook or in the comments section of a news article. That's what I call the shield.
But it also gave companies a sword so that they can take down offensive content, lies and slime — the stuff that may be protected by the First Amendment but that most people do not want to experience online.
As NTU has written before, Section 230 was critical to the growth of a thriving, democratic internet, which in turn has delivered countless benefits to both consumers and taxpayers. Services NTU and its supporters use every day - Google for research, email for contacting others at work and at home, social media for gathering news and information - would not be as helpful, innovative and free as they are today without Section 230.
Sen. Wyden says it even better:
Without Section 230, sites would have strong incentives to go one of two ways: either sharply limit what users can post, so as to avoid being sued, or to stop moderating entirely, something like 8chan — now operating under the name 8kun — where anonymous users can post just about anything and speech supporting racism and sexism is common.
Unfortunately, Republican lawmakers, Democratic lawmakers, and President Trump have proposed gutting Section 230 over the last few years. Many of these proposals do not come from a sober discussion about the benefits and drawbacks of Section 230, but instead from conservative claims about tech company bias against their viewpoints or liberal claims that American tech companies are too big.
The latest proposal comes from some heavy hitters in the U.S. Senate - Commerce Committee Chairman Roger Wicker (R-MS), Judiciary Committee Chairman Lindsey Graham (R-SC), and Sen. Marsha Blackburn (R-TN). Their bill, the Online Freedom and Viewpoint Diversity Act, purports to “clarify the original intent of the law and increase accountability for content moderation practices.” Each of the sponsors claims the bill is motivated by alleged bias against certain political viewpoints on tech platforms.
The bill text, though, would end Section 230 as we know it. The bill:
- Removes interactive computer service immunity from being treated as “publisher or speaker” for “any decision or agreement made or action taken by a provider or user of an interactive computer service to restrict access to or availability of material provided by another information content provider.”
- Limits liability protections to removals of content that an interactive computer service “has an objectively reasonable belief is...obscene, lewd, lascivious, filthy, excessively violent, harassing, promoting self-harm, promoting terrorism, or unlawful.” This is a much higher standard than current law, which gives services flexibility in determining what content is objectionable (“considers to be” rather than “has an objectively reasonable belief”) and includes a catch-all category able to adapt to changes in content and standards over time (the “otherwise objectionable” standard).
- Creates more avenues for services to be considered “information content providers” subject to civil liability, including “any instance in which a person or entity editorializes and substantively modifies the content of another person or entity.”
The changes made by the Wicker-Graham-Blackburn bill raise several questions, because the lawmakers fail to define several key terms in the legislation:
- Would services be treated as “publisher or speaker” if they “restrict access to or availability of material” that they classify as misinformation? What of objectionable or disturbing content that doesn’t clearly fit into one of the nine categories lawmakers create in this new law? If a service makes an editorial note that a piece of misinformation is, indeed, a piece of misinformation, or restricts access to it, are they then liable as an “information content provider” for “editorializ[ing]” or “substantively modif[ying]” the content?
- How do the lawmakers propose to define a “decision,” “agreement,” or “action” that would make a service provider a “publisher or speaker” of content? The bill does not define these terms, and absent any definitions they are subject to broad legal or regulatory interpretations.
- The bill changes the standard for protected removals of content, by stating the provider must have an “objectively reasonable belief” that the content is eligible for removal under Section 230. What is objective? What is reasonable? The lawmakers do not define these terms and, again, absent any definitions they are subject to broad legal or regulatory interpretations.
- The bill says that services would be treated as “information content providers,” subject to greater legal liability, if they editorialize or substantively modify content. What does it mean to editorialize or substantively modify content? The lawmakers, again, do not define these terms.
Absent clear definitions, the lawmakers risk creating even more confusion than under current law. Confusion could spur scores of lawsuits, which would hamper innovation and growth at large technology companies but could destroy small- and mid-sized companies (and would-be competitors to the giants).
To be clear, even with precise definitions this bill would be harmful, because it destroys the “sword and shield” that Sen. Wyden and former Rep. Chris Cox (R-CA) created with Section 230. The bill text, though, indicates that even the sponsoring lawmakers themselves haven’t thought through the implications of their far-reaching legislation.
The internet will no doubt help American consumers and taxpayers through the current pandemic and economic downturn. Notwithstanding significant challenges, it is easier, quicker, and less expensive than ever to access essential government services, job opportunities, news, and information thanks to the modern internet. Proposals to gut Section 230 would curtail future innovation and growth, causing American tech companies to lose the advantage they have had against the rest of the world for decades. That will hurt American companies, consumers, and taxpayers alike.