March 28, 2021
Disinformation has been a tried-and-true weapon against democratic elections across the globe for hundreds of years because it is effective at both small and large scales. Today, malicious influencers targeting Americans can operate confidently, knowing that social media platforms are struggling to navigate an uncertain regulatory environment and the technical challenges of moderating content at scale. The changes that may (or may not) be coming to Section 230 of the Communications Decency Act will shape how disinformation is used in elections.
Communications law may seem like an unusual topic for those focused on elections. Unfortunately, the sheer scale of social media use makes those same platforms vulnerable to exploitation by domestic and foreign malicious actors. All major platforms have published community guidelines on acceptable user behavior. These guidelines often run thousands of words and go into great detail about specific topics discussed, types of media distributed, and even the offline affiliations of users. Some platforms have established trust and safety teams or oversight boards to manage these rules and to decide what ultimately stays on the platform or is removed. Why would platforms invest so much time, effort, and expense to maintain detailed rules and complicated adjudication processes? The answer lies in the protections offered by Section 230.
Section 230 is a surprisingly simple statute (its core provision is only 26 words long) that allows websites to set rules for what content users can or cannot post without the website being held legally liable for that content. The law acts as a shield for online intermediaries such as websites, social media platforms, and other interactive computer service providers. These providers are protected when they decide to remove objectionable or illegal content. The free speech ideals embedded in the law have allowed social media platforms to publish the content of billions of users without having to approve each individual word, photo, or video.
The political debates among users on platforms (stoked by malicious disinformation campaigns) in the 2020 election cycle have pushed Section 230 reform onto the Congressional agenda. Members across the political spectrum grew concerned about both the action and inaction of some platforms in removing content and users that seemingly violated the platforms' own community guidelines. Even so, no bill, or even policy direction, has emerged as a front-runner in the policy discussion. The only point of apparent agreement is that there will be at least some kind of regulatory change.
While Congress is engaged in the policy debate about content moderation, malicious actors will continue to take advantage of the existing Section 230 protection regime to spread misinformation. Journalists, election officials, and voters have a number of tools to help them validate the information and sources they encounter online. The USC Election Cybersecurity Initiative and the RAND Truth Decay project provide a robust list of those tools, including FactCheck (monitoring claims of major political players), Hamilton 2.0 (real-time information on foreign disinformation), and Rumor Control (countering election-specific claims). Ultimately, the best way to stop disinformation is to prevent it from spreading once it has been identified. Every major social media platform has an in-line mechanism for users to report specific posts or media that may violate community guidelines.
All of the evidence points to malicious actors strengthening their capabilities to use disinformation and misinformation to undermine elections. Websites and social media platforms remain vulnerable to this kind of manipulation due to the intermediary protections afforded by Section 230. Efforts to enact some kind of reform to these protections are being hotly debated in Congress and state houses across the country. At the same time, service providers are regulating themselves more aggressively, modifying community guidelines and content moderation procedures to address concerns that the rules about user behavior are not being applied equally or consistently, especially when it comes to political speech.


Maurice Turner
Election Security Analyst, USC Election Cybersecurity Initiative

Maurice Turner is a recognized technologist and cybersecurity expert who regularly provides analysis for television, print, and social media on issues relating to election security and election administration. He has held numerous positions in the public, private, and non-profit sectors, including the United States Election Assistance Commission (EAC), the Center for Democracy & Technology (CDT), and the United States Senate.