Algorithmic Transparency for Search Engine Prioritization of Political Content: A Step in the Wrong Direction


Last month, Florida lawmakers introduced new legislation in the state’s House of Representatives and Senate focused on data privacy and certain transparency issues.1 However, the legislation’s proposed transparency requirements for search engines’ prioritization of “political partisanship” and “ideology” have no counterpart in any major U.S. privacy statute. The overly broad proposals, which appear less informed by empirical evidence than by a desire to engage in political targeting and cater to popular sentiment, should not be included in any well-designed privacy legislation.

An ideal privacy law would apply uniform rules to companies in any sector that handle the sensitive personal data of large numbers of consumers. Instead, Florida’s proposed privacy law would apply only to a handful of large companies that either manufacture smart devices or sell advertisements.2 As a result, the legislation’s potential to improve statewide data practices and consumer rights remains limited.

While the stated purpose of the legislation is to protect consumer privacy, its true focus appears to be politically motivated. Unlike other state privacy laws, the Florida legislation specifically targets search engines and proposes transparency requirements for the prioritization of “political partisanship” and “ideology.” Both the House and Senate versions of the proposed law stipulate that “[a] controller that operates a search engine shall provide a consumer with information of how the controller’s search engine algorithm prioritizes or de-prioritizes political partisanship or political ideology in its search results.”3 The choice of language (“how” a search engine prioritizes political content, as opposed to “whether” it does so) presupposes that search engines prioritize content as a function of partisanship and/or ideology.

As private entities, search engines should be free to prioritize or deprioritize “politically partisan” or “ideological” content, but empirical evidence does not appear to support assertions that they do. In a December 2019 article, researchers at Stanford’s Media Lab and School of Engineering found no evidence of political bias in their audit of search results for every candidate for federal office during the six months leading up to the 2018 U.S. elections. Likewise, a statistical study conducted by The Economist in June 2019 found no evidence of ideological bias in Google search results, concluding that its search engine algorithms rewarded reputable reporting rather than left- or right-leaning news sources.

To the extent that there is user-specific bias in search engine results, such biases often reflect a specific user’s search history, browsing patterns, and preferences. As a general example, users who often visit French or Spanish news sites are more likely to be shown advertisements in those languages than users who visit only English-language websites. Such mechanisms are typically designed to show users content that they might find more interesting, relevant, or useful. In any event, such user-specific biases can largely be avoided through a combination of privacy-oriented browsers, incognito-mode searching, and a virtual private network.

Another potential cause of user-specific bias, as some computer scientists note, is that partisan differences in search terms lead to divergent search engine results. Even then, a recent study found that Google search results demonstrate a mainstreaming effect that partially neutralizes the effect of partisan differences in search terms, rather than amplifying those differences as an algorithm designed to prioritize partisan content would.

In any case, search engine algorithms rely on highly complex, constantly evolving calculations that cannot always be explained clearly without degrading the quality and relevance of search results. That is especially true given the rapid development of AI-enabled search engines. In developing AI systems, programmers often face a tradeoff between the “explainability” of AI algorithms and their underlying effectiveness. For example, medical researchers at New York’s Mount Sinai Hospital developed an AI system, trained on the medical data of 700,000 patients across several hundred variables, that could provide accurate medical diagnoses. Due to the complexity of the underlying algorithms, however, its programmers could not fully describe how the system arrived at its conclusions. If lawmakers were to mandate the explainability of such algorithms, it could very well detract from their effectiveness.

At a time when AI-enabled search engines are developing rapidly, mandating the explainability of search engine algorithms risks harming the quality and relevance of search results. If required at the federal level, such a policy could slow U.S. technological progress in developing the next generation of AI-enabled search engines, to the benefit of global competitors. Instead of imposing politically motivated rules, lawmakers should design well-calibrated, evidence-based privacy rules that protect privacy, reduce trade barriers, and promote technological innovation.

Notes

1.  Despite some differences, the Senate and the House versions of the legislation propose broadly similar rules to protect consumer privacy as well as certain measures to limit the government’s role in social media content moderation and improve the transparency of search engine results related to “politically partisan” or “ideological” content. Florida House of Representatives, H.B. 1547 (2023). Retrieved from: https://www.flsenate.gov/Session/Bill/2023/1547/Bill. Florida Senate, S.B. 262 (2023). Retrieved from: https://flsenate.gov/Session/Bill/2023/262.

2.  More specifically, the proposed law would only apply to businesses with revenue of more than $1 billion that either 1) manufacture smart devices or 2) receive at least 50 percent of annual revenue from selling online or targeted advertisements. H.B. 1547 § 501.173 (2) (e). S.B. 262 § 501.173 (2) (e).

3.  H.B. 1547 § 112.23 (3). S.B. 262 § 112.23 (3).