Proposals to address bias in social media moderation popped up in a multitude of state legislatures in 2021, largely introduced by Republicans in reaction to Twitter and Facebook deplatforming then-President Trump and a number of his high-profile defenders in January. Unfortunately, some of these knee-jerk proposals may become law despite the serious problems they raise regarding constitutionality and the workability of state-level internet regulation.
Many state social media regulation proposals share a common aim: banning large internet platforms from removing users or their posts on the basis of biased, political criteria. Although the reasoning behind this is understandable, trying to accomplish it via legislation quickly becomes problematic. Take, for example, Utah’s “Freedom from Biased Moderation Act,” SB 228, which is likely to become the first such bill signed into law after passing both state legislative chambers last week.
SB 228 takes quite a blunt approach, declaring that a “social media corporation may not employ inequitable moderation practices,” as measured against its own terms of service. A more novel section of the bill requires that these companies arbitrate disputes about moderation via “an independent review board” that “shall consist of at least 11 members who represent a diverse cross-section of political, religious, racial, generational, and social perspectives.” Companies found to have improperly taken down content could be penalized by the state for up to $1,000 per affected user, plus damages.
As TechDirt’s Mike Masnick has pointed out in a rather scathing takedown, this Utah proposal, and others like it, would likely run face-first into First Amendment barriers against compelled speech. Courts have repeatedly upheld private companies’ right to free association in moderating speech on their own platforms against a variety of challenges.
This argument has frequently been lost in the tussle at the federal level over Section 230 and platform liability protections: the vast majority of lawsuits filed against biased content moderation would fail on First Amendment grounds anyway. Section 230 merely allows a more expedient resolution of claims that would ultimately lose a First Amendment challenge. State laws that infringe on this freedom of association with respect to political expression would almost certainly (one would hope) be struck down by courts.
This point certainly does not appear to be lost upon the authors of Texas’ similar legislation, SB 12, which Governor Greg Abbott recently called for his state’s legislature to pass. Section 3 of this legislation is a quite lengthy severability clause that declares itself necessary “because this Act has been enacted amid uncertainty about the application of the United States Constitution and relevant federal statutes.”
The Texas bill’s author clearly understands that its broad prohibition stating that “an interactive computer service may not censor a user… based upon the viewpoint of the user or another person” is a flagrant violation of these platforms’ right to disassociate from speech they consider objectionable.
The Utah legislation, in a similar vein to Florida Governor DeSantis’s proposal, does impose some transparency requirements on the tech giants that might sound attractive in theory. In practice, however, it is not so simple for companies to “clearly communicate” their moderation practices, especially when that obligation is defined, as in SB 228, as impossibly as providing “a complete list of potential moderation practices to all account holders.” Determining what speech constitutes a threat, bullying, harassment, or incitement is inherently contextual and subjective; listing out in advance every specific instance that might qualify is impossible.
Finally, these state attempts to regulate social media would create the same problems as other state-by-state regulation of internet activity, threatening a patchwork of laws that is functionally difficult or impossible for companies to navigate. Imagine, for example, if a number of states all passed legislation following Utah’s model, each mandating an “independent review board” to arbitrate content moderation disputes. Would each state have its own review board, with its own subjective standards for what constitutes a sufficiently diverse array of opinions to qualify as a “neutral” body?
What if some states mandate that social media platforms increase policing of “hate speech,” however that is defined, whereas others attempt to ban the removal of “political speech,” however that is defined, from platforms entirely?
Given that social media users interact without regard for lines on a map, one likely response by the platforms would be a hands-off approach to moderation that preserves all manner of speech most users would rather not see, from the violent to the obscene. As real a concern as private companies restricting political speech may be, the freedom of these platforms to quickly pull down genuinely offensive content is part of what makes them pleasant for most people to use.
As online platforms and their users struggle to decide on a balance between what constitutes proper moderation versus unfair censorship, trying to etch that boundary into law at the state level will prove unworkable and, in many cases, unconstitutional.