House Republicans Miss the Mark on Content Moderation


Amid heightened concerns about how the largest tech companies are policing content and removing accounts from their platforms, the variety of Republican-led legislative remedies at both the state and federal levels continues to expand. To this end, the Republicans of the House Energy & Commerce Committee have put forth a memo containing an a la carte menu of proposed options for regulating content moderation online. 

The memo contains a number of potential legislative actions purporting to bring about “Big Tech Accountability,” including new regulations for large companies, removal of liability protections in Section 230 of the Communications Decency Act, empowerment of the Federal Trade Commission to enforce certain content management practices, new reporting requirements on social media’s impact on children, and new powers for law enforcement.

To the credit of the memo’s authors, they begin by acknowledging the difficulty of squaring government regulation of these platforms with bedrock conservative principles of private property and free enterprise, and constitutional freedoms of speech and association. Unfortunately, many of the actual legislative suggestions that follow fail to adhere to these principles. Moreover, it’s not at all clear that they will bring about positive change in the area most important to many conservatives: perceived bias on the part of tech companies against conservative viewpoints.

Many of the “legislative concepts” the memo lays out overlap in some respects, and a few concerning themes run through them:

Tiered Regulations May Impede Competition

The first proposal offered is to restrict the scope of any of the memo’s other regulations to only “Big Tech companies with an annual revenue of $1 billion.” On one hand, this spares smaller companies from being saddled with potentially crushing compliance costs, an important consideration for any changes to regulation in digital markets. On the other hand, lawmakers should consider that whatever threshold is chosen also sets a de facto barrier to competition from any firm that might otherwise challenge the dominance of the current leaders in Big Tech.

The reach of these proposals is also not particularly limited by the memo’s language purporting to apply them “only” to social media companies, app stores, and “other tech companies engaged in certain activities.” This expansive definition includes virtually any company operating online or in the tech space.

Ironically, regulating the larger firms differently might even restrict or eliminate services that smaller competitors use to their benefit. For example, completely removing Section 230 liability protections for tech companies above this size threshold could very well make it not worth the liability risk for them to host content such as third-party reviews. This would place smaller sellers on platforms such as eBay and Amazon, or in the app stores, at a huge disadvantage, as they rely on positive user feedback to build a reputation and customer base.

First Amendment Concerns

The memo’s introduction explicitly states the authors would not approve of an internet “Fairness Doctrine,” referring to the now-defunct Federal Communications Commission policy requiring that broadcast news present controversial issues in a “balanced” manner. The Doctrine effectively empowered federal regulators to uphold so-called viewpoint neutrality for holders of broadcast licenses, and arguably restricted now-popular content like conservative talk radio and news outlets. The end of enforcement of the Fairness Doctrine and its eventual repeal removed bureaucrats from the role of determining what content was allowed to be delivered to consumers.

Unfortunately, the E&C memo pivots from disavowal of the Fairness Doctrine to effective embrace of its core function of empowering the federal government to dictate allowable content. By suggesting that liability protections should be conditioned on moderators being neutral with respect to “political affiliation or viewpoint,” the memo has more or less endorsed the Fairness Doctrine concept by another name. Viewpoint neutrality is inherently subjective, meaning that practically any decision to flag or take down content can be subject to dispute. Involving the government in arbitrating what constitutes ideological “discrimination” invites the same sort of politically motivated interference in content moderation that the original Fairness Doctrine enabled in broadcast media. 

Companies’ incentive in response to a neutrality mandate might actually be to moderate far less. While this could mitigate concerns about platforms removing conservative content, it could also drastically increase the amount of highly undesirable content that is allowed to enter our social media feeds. Even worse would be limiting liability protection to “moderation of speech that is not protected by the First Amendment or specifically listed in the statute.”

There is an enormous amount of expression that is protected by the First Amendment but that users would happily have social media platforms remove before it appears in their news feeds. A lighter moderating touch could preserve more conservative content, but that might come at the cost of much more prevalent posting of pornography, fraudulent content, or even terrorist recruitment.

As Professor Eric Goldman observes, one of Section 230’s virtues is that it in many ways serves as an enhancement of the First Amendment by not allowing every dispute over content moderation to turn into a new, expensive, and potentially standard-altering First Amendment case. Indeed, Supreme Court precedent has made abundantly clear that the right of private companies to determine what content appears on their publication or platform is itself a form of First Amendment protected speech. 

Nor is it valid to conceive of social media as some sort of “public forum” in which third-party speech is subject to First Amendment protection. As Justice Kavanaugh has written, “when a private entity provides a forum for speech, the private entity is not ordinarily constrained by the First Amendment because the private entity is not a state actor. The private entity may thus exercise editorial discretion over the speech and speakers in the forum. … In short, merely hosting speech by others is not a traditional, exclusive public function and does not alone transform private entities into state actors subject to First Amendment constraints.”

Handing Content Moderation Standards to the FTC

The E&C’s transparency policy concept would place the determination as to whether companies have adequate “processes for making content decisions” into the hands of the Federal Trade Commission (FTC). This presumably only makes sense in a scenario under which the FTC is empowered to define “reasonable moderation practices” as contemplated in another of the memo’s proposals.

Transparency and clarity of content moderation standards are almost certainly a better way to get at these concerns about censorship and bias than tinkering with liability protections, but there are a number of ways that government mandates to that end can go awry.

Obviously, how the FTC would define a reasonable moderation practice matters a great deal, because content moderation is always subjective to some extent. And, unfortunately, the variety of creatively offensive, distasteful, or abusive content that people generate on the internet is only as finite as the imagination of its billions of users. Given the amount of grey area involved, how might “reasonableness” be interpreted by an FTC led by, for example, Acting Chair Rebecca Slaughter, who has already expressed her desire to use FTC antitrust enforcement for social engineering purposes?

If, as in the memo’s fifth legislative concept, companies were required to be able to cite the “specific provision(s) the content or user violated and why” for each challenge to their decisions, their incentive would once again likely be either to drastically over-moderate (e.g. “no content allowed that remotely touches X categories”) or to drastically scale back moderation in a way that makes their communities unpleasant for many to use. It is simply impossible to define a standard that accounts for every specific instance and edge case, regardless of whether the decisions are being made by human moderators or by algorithms.

It is worth noting that the social media giants have already been exploring various options to address questions of moderation bias and rights of appeal on their own, whether in the form of Facebook’s independent oversight board or Twitter’s goal of decentralizing more of users’ control over their experience. Content moderation at the scale of these globally reaching platforms is one of the most difficult, and perhaps unsolvable, problems of the digital economy. Giving a body like the FTC too much control in deciding where the lines should be drawn risks calcifying the industry around whatever practices are safest from enforcers, rather than around what the actual users of these platforms think works best for them.

Redundancy With Existing Law

A number of the memo’s proposals touch upon the hosting of illegal content and cooperation with law enforcement. Online platforms are already obligated to take down illegal content they become aware of, report it to law enforcement, and cooperate with investigations. Some additional reporting or transparency requirements regarding illegal content may be warranted in some circumstances, but Section 230, for example, already contains a blanket carve-out from liability protection with respect to any content that is illegal under federal law.

While the law enforcement section of the E&C memo is short and thus does not contain much detail, it is also worth considering whether added law enforcement powers might trigger genuine privacy concerns for users. Recent debates about encryption, for example, reflect the inherent tug of war between keeping user data and communications private from the prying eyes of government and maintaining public safety and the ability to enforce the law. New requirements to cooperate with law enforcement might add a new front to an already simmering battle.

Conclusion

Transparency may be the area where light-touch changes in policy could be most promising. However, accounting for the subjectivity and error inherent in maintaining any set of content standards is difficult, which means that even modest changes to rules can create a regulatory morass unforeseen by policymakers.

Taken as a whole, however, the majority of the proposals in the E&C Republicans’ memo run counter to one or more of the priorities their authors claim to want to protect: free speech, small businesses and entrepreneurship, and American tech leadership and innovation. Account removal and content policing decisions by major tech companies are a genuine matter of concern to many conservatives, but heavy-handed government intervention threatens to tear at the very fabric of how people share content with one another online. Enforcing “neutrality” in moderation, exposing platforms to liability for their choices about what content to allow, or treating them like public property all violate basic notions of private property and freedom of association.