How Section 230’s Liability Shield Enables Commerce Online

One of the more frustrating aspects of the public policy debate over liability protections for online platforms is that reform proposals often ignore how these protections work in practice and what they actually protect. Section 230 of the Communications Decency Act is frequently blamed for allowing online platforms to bias their moderation against certain political views (a complaint mostly from the Right) and for allowing those same platforms to host hateful or inflammatory content that ought to be taken down (a complaint mostly from the Left).

Conservative critics of Section 230 are not wrong to worry about the implications of how large internet platforms police speech and membership within their massive global communities of users. However, the freedom of association that private entities such as social media platforms enjoy with respect to what users and content they will host is ultimately guaranteed not by Section 230, but by the First Amendment of the U.S. Constitution. Although the First Amendment is perhaps most often thought of in terms of protecting individual expression, its protections of a private actor’s freedom of association are equally important. The ability of a business to curate a brand and audience by choosing what products and speech to allow on its premises provides a real and demonstrable economic benefit for online platforms and their users.[1]

As Santa Clara University law professor Eric Goldman has laid out in detail, what Section 230 provides is not a substitute for these First Amendment protections but an enhancement of them: it allows the early dismissal of legal challenges to content moderation decisions that are clearly constitutionally protected.[2] And while the largest internet platforms like Google and Facebook certainly could never have achieved their present success without these liability protections, they are hardly the only, or even the primary, beneficiaries. Small companies arguably benefit to an even greater degree: Section 230 helps them avoid lengthy litigation that would ultimately vindicate them on First Amendment grounds, and it is the same shield that enables large platforms to host their third-party sales, ads, and reviews.

Lawmakers who wish to tamper with these liability protections should do so with a full understanding of the positive role they play in the digital economy - a role that could be threatened by new regulations targeted at the relatively tiny subset of content moderation decisions that lawmakers deem undesirable.

Enhancing First Amendment Protections

The ability of private companies to choose what content or products they will or will not host is well established as a First Amendment-protected right.[3] Importantly, this right of free association has been held to apply no differently to online platforms. To quote the late Justice Antonin Scalia, “whatever the challenges of applying the Constitution to ever-advancing technology, ‘the basic principles of freedom of speech and the press, like the First Amendment's command, do not vary’ when a new and different medium for communication appears.”[4] As Corbin Barthold and Berin Szóka argue in detail against state-level content moderation regulations, “The government can no more compel Twitter to explain or justify its decision-making about which content to carry than it could compel Fox News to explain why it books some guests and not others.”[5]

Some of Big Tech’s critics have attempted to bypass this First Amendment argument by declaring that social media platforms are akin to a public square due to their size and reach, but this argument, too, has been rebuffed by the courts. As Justice Kavanaugh wrote in 2019, “merely hosting speech by others is not a traditional, exclusive public function and does not alone transform private entities into state actors subject to First Amendment constraints.”[6] A panel of the Ninth Circuit Court of Appeals specifically upheld the application of this principle to online platforms in its dismissal of Prager University v. Google just last year.[7]

It is important, too, to recall that one of the core reasons for Section 230’s creation in the first place was to encourage websites to both host and moderate third-party content without fear of legal reprisal.[8] The Stratton Oakmont v. Prodigy decision in 1995 had set a new precedent: because Prodigy exercised editorial control by moderating some user posts, it could be held liable as the publisher of “defamatory” posts it failed to remove.[9] That rule could have had a major chilling effect on the forums, comment boards, and nascent blog-hosting sites that were the heart of the growing online world. The original bill by then-Representatives Chris Cox (R-CA) and Ron Wyden (D-OR) that later became Section 230 was a direct response to this threat.

At the same time, concerns about the proliferation of pornography and other obscene, violent, or illicit content online produced a groundswell of calls to ban such content, culminating in the passage of the Communications Decency Act (CDA). In recent years, now-Senator Wyden has described Section 230’s function as a “sword and shield”[10] - the “sword” being the part that allowed online services to remove content they deemed inappropriate, while the “shield” kept them from being held liable for (non-illegal) content that they host but do not create.[11]

As many authors have described at length, it is simply impossible to imagine the present internet economy, with its forums for speech and commerce, existing in its present form - certainly at its present scale - without these liability protections.

Section 230 as a Protection for Startups

As alluded to earlier, a large part of the economic value of Section 230 lies in the costs it saves online platforms by providing for the quick dismissal of legal challenges that would ultimately fail on First Amendment grounds anyway. Naturally, the companies that stand to lose the most from such legal battles are those with the least free capital on hand - startups and small online vendors.

A recent report on tech startups by Engine calculated that the average seed-funded tech startup has about $55,000 in cash to spend per month.[12] Even getting a frivolous content moderation dispute thrown out via a successful early motion to dismiss, the report found, can cost from $15,000 to as much as $80,000, while defending a case through discovery and trial can quickly run into the hundreds of thousands of dollars.[13] The fact that moderation decisions are protected by the First Amendment in principle is little consolation if a company would have to go bankrupt to prove that point in court.
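To put those figures in perspective, here is a minimal back-of-the-envelope sketch in Python using the numbers above; the $200,000 discovery-and-trial figure is an assumed stand-in for “hundreds of thousands of dollars,” not a number from the report:

```python
# Back-of-the-envelope: how many months of seed-stage runway a content
# moderation suit consumes, using the figures cited above.
MONTHLY_CASH = 55_000  # average monthly cash available to a seed-funded startup

# Illustrative defense costs. The dismissal range comes from the Engine
# report; the discovery-and-trial figure is an assumed placeholder.
scenarios = {
    "early motion to dismiss (low)": 15_000,
    "early motion to dismiss (high)": 80_000,
    "discovery and trial (assumed)": 200_000,
}

for stage, cost in scenarios.items():
    months = cost / MONTHLY_CASH
    print(f"{stage}: ${cost:,} = {months:.1f} months of runway")
```

Even the cheapest win consumes weeks of runway, and a defense that reaches discovery burns months of it - all before a single First Amendment argument is vindicated.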

Startups in the digital economy face long odds to begin with, and removing the safe harbor against being bankrupted by a content moderation lawsuit could discourage venture capital investment in future online platforms. A study authored by Techdirt’s Mike Masnick compared investment in internet platforms and found that the years immediately following Section 230’s passage and its vindication in the courts saw a huge uptick in US venture capital investment in online platforms relative to the EU, which has somewhat narrower intermediary liability protections. In terms of total investment, the data suggest that in the US, “under the framework set forth by CDA 230, a company is 5 times as likely to secure investment over $10 million and nearly 10 times as likely to receive investments over $100 million, as compared to internet companies in the EU.”[14]

Defining “Big” - Who Gets Section 230 Protections and Who Doesn’t?

Many proposals to rein in Section 230, then, try to keep it as a shield for smaller companies while curtailing or outright eliminating its protections for those above a given size. House Minority Leader Kevin McCarthy recently outlined the basics of a plan to limit Section 230’s protections for “Big Tech” and to mandate increased transparency and user rights to appeal content moderation decisions.[15]

Merely defining how such a limitation would apply quickly becomes problematic, as online platforms come in a bewildering variety of functions and forms. The way content is hosted and moderated, and the effects that modifying Section 230 would have, vary widely among such diverse companies as Reddit, Yelp, Amazon, Twitter, YouTube, and eBay, to name just a few of the largest players.

It is possible for a company to amass an enormous user base while remaining relatively tiny in terms of employees, market cap, and overall resources. Sites like Wikipedia and Craigslist, for example, generate huge amounts of web traffic yet are a fraction of the size of the large social media or search companies. On the other hand, a company with a massive market cap or annual revenue, but for which hosting third-party content is not the main cash cow, could get caught up in a Section 230 reform and find it easier to simply eliminate whatever content hosting it does have (comments and reviews being the most obvious examples).

As the internet economy continues to grow, more and more companies would bump up against these arbitrarily defined thresholds - whether set by monthly users, revenue, or market cap - always outpacing the government’s ability to adjust them.

Large Platforms Are Big Business for Small Businesses

But even if a set of constraints on Section 230 were limited to just the few largest, most powerful “Big Tech” companies, the economic consequences would not fall merely upon those companies themselves. Without the shield of Section 230, for example, platforms could have to defend themselves against every user review they host that someone challenges as defamatory. In that situation, the cost of defending even a legitimate negative review is rarely worth it, so the incentive is to simply take down any content that is challenged - a perverse dynamic known as the “heckler’s veto.”[16]

The existence of large online retail platforms effectively reduces barriers to entry for small market entrants who can take advantage of the scale of their existing audience.[17] Third-party sellers on Amazon, for example, accounted for 55 percent of units sold on the platform in 2021,[18] while 73 percent of these third-party sellers employed just 1 to 5 people as of 2018.[19]

In practice, making platforms liable for third-party user reviews would threaten the entire ecosystem of small sellers who have used platforms such as Amazon, eBay, Etsy, or many others to connect with customers across the world. A 2020 survey by the Internet Association confirmed that consumers were overwhelmingly more likely to purchase goods online for which reviews were available.[20] The ability to build an online “good credit” record via verified reviews on these online marketplaces is crucial to the ability of smaller businesses and individuals to attract new customers.

Similarly, the ability of businesses large and small to seek new customers through targeted ads on platforms like YouTube and Facebook gives them affordable access to audiences they could never reach on their own. These advertisements, too, are third-party content, made safer for sites and platforms to host because they are not responsible for verifying the content of each one.[21] Conversely, advertisers are often cautious about what sort of content their products are associated with, and a platform’s ability to monetize its hosted content can be compromised if it is unable to take down the sort of “lawful-but-awful” user-generated content that causes advertisers to flee.

Thus, the entire economy of small sellers who make a living via their ability to tap into the large incumbent platforms is endangered if those large platforms suddenly become liable for every decision they make as to who they do and do not choose to host.

Barriers to Competition

Scaling a new social media platform or a new online retailer up to the point of competing with the likes of Amazon or Facebook is a steep climb, but that competition does exist, and it is fierce.[22] While weakening Section 230’s protections across the board would threaten startups and small online businesses in particular, setting some arbitrary threshold at which liability protections become conditional or disappear could have the unintended effect of protecting large incumbent platforms from competition.

It should be no surprise, then, that Facebook’s Mark Zuckerberg has ultimately joined calls to reform Section 230 and establish federal guidelines for content moderation.[23] Not only do large incumbents have the resources to navigate whatever changes Congress makes; those same regulatory costs create one more hurdle for any rival seeking to take their place.

Indeed, the model of regulation that Facebook has endorsed would require platforms to “earn” Section 230 protections by demonstrating the capability to adequately police their own content. Building the software infrastructure to do that, especially the machine learning systems needed to rapidly take down offending content at scale, is a complicated and costly undertaking that large incumbents like Facebook and YouTube have already completed.

Similarly, the Platform Accountability and Consumer Transparency (PACT) Act (S. 797), a bipartisan Section 230 reform proposal, would require any online platform that hits the milestones of 1 million unique monthly users and $50,000,000 in revenue to set up a complicated notice-and-appeal process and to comply with transparency reporting mandates.[24] The substantial compliance costs of these requirements would create another barrier between entrenched incumbent platforms and their competitors, on top of many other concerns about the workability of the bill.[25] Ironically, one of the unintended incentives this may create for a growing online platform nearing the covered-platform threshold is to sell out to one of the large incumbents that already has this compliance infrastructure built.[26]
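To illustrate the cliff that this kind of size test creates, here is a minimal sketch of the coverage logic as summarized above; the threshold values are the ones described in this piece, and every name in the code is hypothetical rather than drawn from the bill text:

```python
# Hypothetical sketch of a size-based coverage test like the PACT Act's,
# using the thresholds summarized above. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Platform:
    unique_monthly_users: int
    annual_revenue_usd: int

def is_covered(p: Platform) -> bool:
    """A platform crossing both milestones must build the notice-and-appeal
    and transparency-reporting infrastructure."""
    return (p.unique_monthly_users >= 1_000_000
            and p.annual_revenue_usd >= 50_000_000)

# A growing platform faces a step-change in compliance costs at the line:
print(is_covered(Platform(999_999, 60_000_000)))    # False - just under
print(is_covered(Platform(1_000_000, 60_000_000)))  # True - mandates now apply
```

However the statute words it, the economic point is the bright line itself: crossing it switches on a fixed compliance burden that incumbents have already absorbed and that a growing rival must suddenly fund.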

Section 230 Encourages Diversity in Moderation Approaches

Ultimately, some amount of content moderation is necessary to create an online forum that most average, decent people will want to use. But the larger the quantity of content hosted, the harder it is to moderate that content well and the more mistakes will be made, regardless of whether moderation is handled by algorithms, humans, or both.[27] By protecting a company’s ability to choose how it moderates content without fear of legal reprisal, Section 230 also encourages competition among different approaches to moderation.

If a platform chooses to only accommodate posts conforming to a given religious, political, or other viewpoint, it can do so. If it chooses to keep a hands-off approach that allows all of the most offensive and repugnant opinions humans may conceive of (the former 8chan, for example), a platform is free to create that community too (so long as the hosted content does not violate federal law).

Another approach is to attempt to satisfy platform users by outsourcing accountability, as Facebook has done with its Oversight Board. Whether this ostensibly neutral third-party overseer can provide a real check against arbitrary moderation remains to be seen. Twitter, meanwhile, has been dabbling with letting users crowdsource efforts to combat “misinformation” via its Birdwatch program,[28] while various platforms offer user-controlled content filtering with varying degrees of success.

Reforms to Section 230 that make its liability shield contingent upon defined moderation standards - or even upon mandates for greater transparency[29] - threaten to homogenize moderation practices when we should instead encourage sites to experiment with policing content in ways that build the online communities that work best for their users.

Conclusion

Section 230 is frequently portrayed by its detractors as a “giveaway” to tech firms, but this misrepresents its origins and ignores that the prime beneficiaries of its liability protections have been consumers. It allowed for the creation of an internet ecosystem that has generated so many colossally wealthy companies precisely because so many people have found it useful to interact on their platforms. Critics also frequently argue that the law is outdated because so much has changed about the digital economy since 1996. Yet much of what has changed was in fact enabled, even driven, by the proliferation of platforms hosting third-party content that this liability shield allowed.

The First Amendment enhancement that Section 230 provides is, like any policy choice, not without trade-offs. Where a Facebook, YouTube, Amazon, or an app store chooses to draw the line on acceptable expression certainly has implications for public dialogue and the free dissemination of information. But it is also crucial to remember that the sword and shield Section 230 provides defend not only companies’ right to moderate content as they choose, but also their ability to host all of the billions of user interactions and transactions that they do allow. Injecting the government too deeply into online content moderation is itself a threat to basic principles of free interaction in a private market.

Just as importantly, removing or weakening Section 230’s safe harbor protections, even just for the largest platforms, could have serious negative impacts on new investment in the next generation of online platforms, undermining US leadership in the digital economy.


[1] Wham, Ethan. “An Economic Case for Section 230.” Project DisCo, Sept. 6, 2019. https://www.project-disco.org/innovation/090619-an-economic-case-for-section-230/ 

[2] Goldman, Eric. “Why Section 230 Is Better than the First Amendment.” Notre Dame Law Review Reflection, Vol. 95, p. 33, 2019. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3351323

[3] See Congressional Research Service, “Free Speech and the Regulation of Social Media Content,” Mar. 27, 2019, https://crsreports.congress.gov/product/pdf/R/R45650. As the report observes, “Government action regulating internet content would constitute state action that may implicate the First Amendment.”

[4] Brown v. Entertainment Merchants Association, 564 U.S. 786 (2011). Scalia is quoting from Joseph Burstyn, Inc. v. Wilson, 343 U.S. 495, 503 (1952). https://supreme.justia.com/cases/federal/us/564/786/

[5] Barthold, Corbin, and Szóka, Berin. “No, Florida Can’t Regulate Online Speech.” Lawfare Blog, Mar. 12, 2021. https://www.lawfareblog.com/no-florida-cant-regulate-online-speech

[6] Manhattan Community Access Corp. v. Halleck, 587 U.S. ___ (2019). https://supreme.justia.com/cases/federal/us/587/17-1702/ 

[7] Prager University v. Google LLC, No. 18-15712 (9th Cir. 2020). https://law.justia.com/cases/federal/appellate-courts/ca9/18-15712/18-15712-2020-02-26.html 

[8] Kosseff, Jeff. The Twenty-Six Words That Created the Internet. Cornell University Press, 2019.

[9] Stratton Oakmont, Inc. v. Prodigy Services Co. (N.Y. Sup. Ct., May 24, 1995). https://h2o.law.harvard.edu/cases/4540

[10] Sen. Ron Wyden. “Floor Remarks: CDA 230 and SESTA,” Medium.com, Mar. 21, 2018. https://medium.com/@RonWyden/floor-remarks-cda-230-and-sesta-32355d669a6e 

[11] The relevant sections of the statute are 47 U.S. Code § 230 (c)(1): “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

and (c)(2): “No provider or user of an interactive computer service shall be held liable on account of

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).”

https://www.law.cornell.edu/uscode/text/47/230 

[14] Masnick, Michael. “Don’t Shoot the Message Board: How Intermediary Liability Harms Online Investment and Innovation.” Copia and NetChoice, June 2019. https://netchoice.org/wp-content/uploads/Dont-Shoot-the-Message-Board-Clean-Copia.pdf

[15] Rep. Kevin McCarthy. “Framework to Stop the Bias and Check Big Tech,” June 27, 2021. https://www.republicanleader.gov/framework-to-stop-the-bias-and-check-big-tech/ 

[16] Goldman, ibid.

[17] Dippon, Christian M. “Economic Value of Internet Intermediaries and Liability Protections,” Internet Association, June 5, 2017.  https://internetassociation.org/wp-content/uploads/2017/06/Economic-Value-of-Internet-Intermediaries-the-Role-of-Liability-Protections.pdf 

[18] “Share of paid units sold by third-party sellers on Amazon platform as of 2nd quarter 2021.” Statista, Accessed May 10, 2021. https://www.statista.com/statistics/259782/third-party-seller-share-of-amazon-platform/ 

[19] “Number of employees employed by businesses selling on Amazon marketplace in 2018.” Statista, Accessed May 10, 2021. https://www.statista.com/statistics/886904/amazon-seller-business-size-by-employees/ 

[20] “Survey on Holiday Shopping and Online Reviews.” The Internet Association, Dec., 2020. https://internetassociation.org/wp-content/uploads/2020/12/IA_Survey-On-Holiday-Shopping.pdf 

[21] Gellis, Cathy. “How to Think About Online Ads and Section 230.” Techdirt, Feb. 10, 2021. https://www.techdirt.com/articles/20210207/19262446203/how-to-think-about-online-ads-section-230.shtml

[22] Stewart, Sean. “Nipping at Big Tech’s Heels: Competition in Social Media.” Competitive Enterprise Institute, Aug. 8, 2019. https://cei.org/blog/nipping-at-big-techs-heels-competition-in-social-media/

[23] United States Cong. House. Committee on Energy and Commerce Subcommittee on Consumer Protection & Commerce and Communications & Technology. "Hearing Before the United States House of Representatives." March 25, 2021. 117th Cong. 1st Session. (Testimony of Mark Zuckerberg, CEO, Facebook.)  https://docs.house.gov/meetings/IF/IF16/20210325/111407/HHRG-117-IF16-Wstate-ZuckerbergM-20210325-U1.pdf 

[24] Platform Accountability and Consumer Transparency Act, S. 797, 117th Cong. (2021). https://www.congress.gov/bill/117th-congress/senate-bill/797

[25] Goldman, Eric. “Comments on the ‘Platform Accountability and Consumer Transparency Act.’” Technology and Marketing Law Blog, July 27, 2020. https://blog.ericgoldman.org/archives/2020/07/comments-on-the-platform-accountability-and-consumer-transparency-act-the-pact-act.htm

[26] Goldman, Eric, and Miers, Jess. “Regulating Internet Services by Size.” CPI Antitrust Chronicle, Santa Clara Univ. Legal Studies Research Paper. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3863015 

[27] Masnick, Mike. “Masnick’s Impossibility Theorem: Content Moderation at Scale Is Impossible to Do Well.” Techdirt, Nov. 20, 2019. https://www.techdirt.com/articles/20191111/23032743367/masnicks-impossibility-theorem-content-moderation-scale-is-impossible-to-do-well.shtml 

[28] Coleman, Keith. “Introducing Birdwatch, a community-based approach to misinformation,” Twitter.com, Jan. 25, 2021. https://blog.twitter.com/en_us/topics/product/2021/introducing-birdwatch-a-community-based-approach-to-misinformation 

[29] Masnick, Mike. “Transparency is Important; Mandated Transparency is Dangerous and Will Stifle Innovation and Competition.” Techdirt, Oct. 29, 2020. https://www.techdirt.com/articles/20201028/17461945607/transparency-is-important-mandated-transparency-is-dangerous-will-stifle-innovation-competition.shtml