Congress Regulating Content Moderation Could Discourage Competition

The House Energy & Commerce Committee hosted the latest hearing to feature the CEOs of large social media platforms as witnesses, in this case Facebook’s Mark Zuckerberg, Google’s Sundar Pichai, and Twitter’s Jack Dorsey. The hearing, titled “Disinformation Nation,” largely consisted of Republican and Democratic representatives alike castigating the CEOs for content moderation policies that either infringed too much on free speech or infringed too little on speech they didn’t like. The only thing lawmakers of both parties agreed upon was that these companies should be regulated, perhaps including repealing, in whole or in part, Section 230 of the Communications Decency Act (CDA).

Lawmakers, particularly conservatives, who already harbor suspicions about the size of Big Tech and its potential anti-competitive practices should embrace market competition (rather than “breaking up Big Tech”) as the best way to keep the online giants honest. That means being wary of unintentionally creating new, artificial barriers to entry that the existing tech giants can use to fend off the next generation of rival platforms. Yet proposals to amend Section 230 may accomplish just that.

It is important to remember that CDA 230’s liability protections were, in essence, created to serve as a pro-competitive, pro-growth measure in the early days of the internet. Without the ability to freely police legal-but-harmful content, such as graphic violence, pornography, and racist speech, open online communities such as Facebook, Twitter, and YouTube would be deeply unpleasant for most people to use and could never have grown the user bases they have today. And without secure liability protections that allow these companies to decide where to draw the line on content, many online platforms would likely feel obliged to dramatically limit their users’ speech out of self-preservation.

Yet lawmakers on the left threaten to limit or remove these liability protections because they don’t think platforms remove enough harmful speech, while lawmakers on the right believe the companies are harming free speech by engaging in ideologically biased moderation. Both sides, in some instances, raise valid points. Content moderation is one of the most difficult challenges posed by platforms that connect massive numbers of people with diverse values and standards of behavior. No matter how much time, money, and personnel these companies invest in trying to balance those competing demands, not everyone will agree with the result.

Given that the common denominator across the ideological spectrum appears to be that Big Tech is getting content moderation wrong, it shouldn’t be a surprise that a company like Facebook is willing to have some of the decision-making taken out of its hands, as Zuckerberg once again proposed in his testimony.

While taking a hacksaw to CDA 230’s liability protections would be ruinous to online platforms large and small, new content moderation regulation runs the risk of benefiting the existing large platforms most. Not only would having the government or some designated third party set the standards allow companies to blame the government when some users are inevitably upset by the balance that is struck, but the burdens of compliance would also be far easier for the Facebooks and Googles of the world to shoulder.

Even if, as Zuckerberg suggests, the most onerous liability and transparency requirements were scaled to fall more heavily on larger companies, however “large” is defined (by revenue? market capitalization? number of users?), that threshold becomes a new barrier for any challenger to surmount in order to compete. Creating a size threshold beyond which a growing social media challenger suddenly has to build an army of lawyers and support staff to maintain government compliance and fend off lawsuits is likely to reduce competition, not increase it.

The social media market is full of smaller fish nibbling away at Facebook, Twitter, and YouTube even as those giants compete with one another. Techdirt’s Mike Masnick summarizes the competitive status quo nicely: “If you don't clean up garbage on your website, your users get mad and go away. Or, in other cases, your advertisers go away. There are plenty of market incentives to make companies take charge.”

All three CEOs summoned to Thursday’s hearing made sure to highlight the ways each has attempted to get content moderation right, and to the extent they fall short, that failure creates space for new upstarts who do a better job to eventually replace them. As Twitter CEO Jack Dorsey correctly put it in his opening comments today, deciding exactly how and to what extent a social media platform should police its users and their posted content is ultimately “a business decision,” and “forcing every business to behave the same reduces innovation and individual choice, and diminishes free marketplace ideals.”

It is easy to forget that social media as a concept is still very young, and consumers and companies alike are often figuring out what works on the fly. The liability protections provided by CDA 230 are not perfect or inviolable, but they do create a sound framework under which the search for the right answers on content moderation can occur. Lawmakers should take care that, in meddling with that framework, they don’t end up erecting barriers to innovation.