
By Surina Naran (’27) and Aadi Kucheria (’28)
In the wake of January 6th, many wondered what the lasting repercussions would be. Would it be the end of democracy itself? Was it a precursor to the crumbling institutions of our republic? Years later, while many of those questions still remain, one detail of that assault has created a clear ripple effect that may change the American landscape for years to come: the banning of Donald Trump from social media platforms. In response to his inflammatory remarks regarding the riot, Twitter announced that it was permanently banning Donald Trump from its platform to prevent “the risk of future incitement of violence” (Allyn, Keith). Instagram and Facebook soon followed suit. Though we didn’t know it at the time, the de-platforming of the former president would spark a movement in right-wing politics that may change social media as we know it.
Section 230 of the Communications Decency Act of 1996 permits online platforms to moderate content in good faith and protects them from being held liable for the content users post on their platforms (Cox, Wyden). Over the last few years, conservative politicians and lawmakers have argued that social media platforms (SMPs) are abusing the protections of Section 230 by purposefully silencing conservative voices and opinions online (Rucker, Goodman). Trump himself tweeted “REPEAL SECTION 230!!!” and issued the “Preventing Online Censorship” Executive Order to limit content moderation. Overall, conservative actors call for major reform of Section 230, alleging that it allows SMPs to unfairly censor conservative opinions and violate users’ free speech. These efforts are actively playing out in Congress, with dozens of pending bills aimed at reforming or repealing Section 230 and multiple states proposing similar legislation (Draper). However, the claims that content moderation violates free speech and should therefore be restricted are misguided and misinformed. Because Section 230 grants legal protections to SMPs for content moderation and the First Amendment does not obligate private entities like SMPs to host all forms of speech, good faith content moderation is legally protected and does not violate free speech. Therefore, right-wing attempts to restrict such moderation in the name of free speech, such as Texas House Bill 20, are baseless and blatantly defy established legal precedent.
Good Faith Content Moderation: Rhetoric, Protections, and Applications
The Good Samaritan clause of Section 230, or Section 230(c), uses moral and ethical rhetorical appeals to encourage online platforms to act in the best interest of their communities by moderating harmful content. More specifically, it frames content moderation as a virtuous act and encourages platforms to take a role in creating safe online spaces. For example, the use of the phrase “good faith” when describing necessary actions to restrict harmful content sets a standard of integrity for online platforms and encourages them to act reasonably and without malintent. These emphases on character and accountability create a sense of trust in online platforms to protect users from harmful content in a responsible manner and therefore serve to justify the protections against liability that Section 230 grants SMPs.
The intention of this rhetoric is legally backed up by Section 230(c)(2)(A), which protects most moderation choices made by SMPs from legal liability. The clause states that “no provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected” (Cox, Wyden). In other words, not only does Section 230 encourage SMPs to practice good faith content moderation, but it also provides complete legal protection for said moderation provided it is done in good faith to protect users from harm.
While Section 230 does legally provide these protections, some conservative critics argue that the law’s broad scope enables platforms to censor speech under the guise of good faith moderation. That broad interpretation was established in Zeran v. America Online (1997), which decided whether an online platform could be held liable as the distributor of third-party content, since distributors are liable for knowingly distributing illegal content. The case set a significant precedent, and in the decades since, courts have followed in the Fourth Circuit’s footsteps by interpreting Section 230 as a broad liability shield for internet service providers (Johnson, Castro). As conservatives argue, this may prevent SMP bad actors from being held liable for oversteps. For example, in a recent lawsuit against Snapchat, a Texas teen alleged that the app’s temporary photo and video feature created an opportunity for his teacher to send him sexually explicit content (Doe v. Snap, Inc.). In line with the broad interpretation set by Zeran v. America Online, the Fifth Circuit dismissed the case, allowing Snapchat to evade legal consequences for a crime for which, some would argue, it bore partial responsibility. Opponents of content moderation argue that this case, and many others like it, exemplifies the danger of Section 230’s broad scope.
However, Section 230’s scope is not unlimited. Some claim that the law’s broadness allows SMPs to rampantly abuse the good faith clause, but this overlooks the numerous exceptions to the liability shield that the courts have laid out. Besides the fact that the authors of the law, Wyden and Cox, recently stated that the broad interpretations of Section 230 do not go against their original intent (Johnson, Castro), multiple legal cases have narrowed the scope of Section 230 to prevent it from serving as a complete shield for bad actors. For example, Fair Housing Council of San Fernando Valley v. Roommates.com (2008) ruled that Section 230(c)(1) does not apply if the accused platform induced illegal content. Other cases created additional exceptions, such as protections not applying if “the defendant encouraged the development of the illegal content, or if the plaintiff’s claim does not arise from the defendant’s publishing or content moderation decisions” (Johnson, Castro). These exceptions give the courts the ability to limit the scope of Section 230 in cases of moderation done in bad faith. Overall, although there is some validity to the argument that Section 230 is too broad and can shield bad actors, the bottom line is that, in its current state, Section 230 protects all content moderation by SMPs as long as the decisions are made in good faith to protect users from harmful content.
Texas House Bill 20: In Defiance of Precedent and Congressional Intent
One of the main examples of these legislative efforts is Texas House Bill 20, which was proposed by Texas Republicans and prohibits social media companies from censoring user content based on the viewpoint of the user. It also allows a Texas resident to sue a platform if they believe they have been “wrongfully censored” due to their political ideology. After Governor Abbott signed the bill into law in 2021, NetChoice, a group representing Google, Facebook, and other SMPs, sued the Attorney General of Texas in federal court, alleging that the law violates their First Amendment right to exercise editorial discretion (Limbrick). After a series of appeals and injunctions, the U.S. Court of Appeals for the Fifth Circuit voted 2-1 to uphold the law. Although the Supreme Court blocked enforcement of the law in 2022 and ultimately vacated the Fifth Circuit’s ruling and remanded the case in 2024, the Fifth Circuit’s flawed ruling in NetChoice v. Paxton highlights a broader issue: conservative efforts to limit content moderation rely on arguments that disregard legal precedent and congressional intent.
First Amendment Protections for Private Entities
In addition to questioning Section 230 protections, many conservatives contend that content moderation by private platforms inherently constitutes a violation of free speech rights under the First Amendment (Rucker, Goodman). These claims misinterpret the First Amendment, as it does not obligate private entities, like social media platforms, to host all forms of speech. The First Amendment states that “Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech” (U.S. Const., amend. 1). This clause protects individuals’ right to speak and publish from government censorship and its attempts to suppress ideas, information, or content. This applies to public institutions like Congress, state and local governments, and government agencies and officials. However, it does not apply to private individuals or businesses, meaning that private entities are not required to uphold First Amendment protections and can impose stricter limitations on speech and expression. As stated by the Supreme Court in Manhattan Community Access Corp. v. Halleck (2019), “The Free Speech Clause prohibits only governmental abridgment of speech. The Free Speech Clause does not prohibit private abridgment of speech” (Manhattan Community Access Corp v. Halleck). Therefore, because social media platforms are private, not public, entities, the First Amendment’s Free Speech Clause does not prevent them from regulating content on their platforms that they deem inappropriate.
Nevertheless, many conservative lawmakers in favor of reform argue that content moderation should be deemed illegal under the First Amendment. They claim that SMPs fall under the public function exception of the “State Action Doctrine,” which states that a private entity will be considered a state actor if it engages in activity that is usually exclusively reserved for the state (Patty, Limbrick). This argument relies on the notion that SMPs are currently vital to society and fulfill a variety of public needs, and therefore function as public forums. Under this framework, SMPs would be state actors under the public function exception, and content moderation would be a violation of the Free Speech Clause of the First Amendment. However, Supreme Court precedent has applied the public function exception very narrowly, with Justice Kavanaugh stressing in Manhattan Community Access Corp. v. Halleck that very few functions fall into that category and that “it is not enough that the function serves the public good or the public interest in some way” (Manhattan Community Access Corp v. Halleck). Moreover, the Supreme Court ruled in Hudgens v. NLRB (1976) that providing a forum for speech is not solely an activity performed by public entities, meaning that a private entity that provides a speech forum cannot be classified as a state actor on that basis alone. In summary, despite claims that content moderation by SMPs violates the First Amendment under the public function exception, Supreme Court precedent has defined SMPs as private entities whose actions are not restricted by the Free Speech Clause. Therefore, attempts by right-wing actors to portray content moderation as a violation of the First Amendment are legally unsupported.
Invalidity of Conservative Claims
Right-wing actors have grounded their attempts to legally limit content moderation by social media platforms in two central claims: that content moderation by SMPs is not, or should not be, protected by Section 230, and that content moderation by SMPs violates the First Amendment and its free speech protections. For example, Florida Law S.B. 7072 attempted to restrict SMP content moderation by arguing that SMPs play a significant role in preserving the First Amendment rights of Floridians and, therefore, should not be allowed to censor their speech (Limbrick). However, this law and others that use this rationale are invalid because Section 230 of the Communications Decency Act protects content moderation by online platforms as long as it is in good faith, and First Amendment protections against censorship do not apply to SMPs because they are not government actors. These facts clearly undermine the central claims of anti-content moderation conservatives. The 11th Circuit’s ruling on Florida S.B. 7072, which struck down the law because its restriction of content moderation violated the rights granted to SMPs by the First Amendment and Section 230, supports this argument (Limbrick).
Looking Forward: The Implications of Content Moderation Restrictions
Conservative efforts to limit content moderation in the name of protecting free speech are invalid because Section 230 grants legal protection to SMPs for good faith content moderation, and the First Amendment does not require private entities to host all forms of speech. Beyond the legal reasoning itself, it is important to recognize how damaging restrictions on Section 230’s protections could be to freedom of speech and to the functionality of social media platforms as a whole. Laws like Texas House Bill 20 and Florida S.B. 7072, if implemented, would allow states to compel speech on private platforms, giving government actors excessive control over content and undermining the effectiveness of the First Amendment. While discussing Texas House Bill 20 in her paper, “Hate Speech, Insurrections, and Fake News,” Limbrick goes as far as to say that “adhering to the Fifth Circuit’s opinion would drastically alter the fate of the internet, and potentially, doom social media as we know it today” (Limbrick, 1170). So, as political motivation to reform Section 230 grows on both sides of the aisle and the courts continue to navigate this issue, lawmakers must act to uphold the legal protections that allow private entities to responsibly moderate content and resist the pressure to wield the right to free speech as a weapon for narrow political goals. By balancing the protection of users from harmful content with the maintenance of a diverse and open online environment, lawmakers and officials have both the opportunity and the responsibility to ensure the integrity of the online world and adherence to the right to free speech. Only once that balance is achieved can the debate around social media, censorship, and freedom of speech come to an end.