A 229-Year-Old Handbook for Social Media Companies

Social media platforms function as marketplaces of ideas in which the democratic tenets of participation, liberty, and community-building are designed to thrive. The recent onslaught of objectively false information about the COVID-19 pandemic, election security, and the reignited fervor of the Black Lives Matter movement, however, has awakened the public to the ways in which American democratic values of open communication and participation are being exploited through disinformation and harmful content.

The public and corporate leaders have called upon companies such as Twitter, Facebook, and YouTube to take more proactive and stringent measures to address false, hateful, and otherwise problematic content on their platforms. It is the age-old tale of the social contract, in which we sacrifice certain freedoms for other protections; but in 2020, it is social media companies rather than government actors with whom we craft this pact.

Online platforms have been forced to confront the longstanding debate about free speech in the cyber realm and to reassess their role as online “arbiters of truth.” The resurgence of this debate has engendered more questions than answers, the most vexing being: How can social media platforms safeguard freedom of expression while encouraging tolerance and stemming harmful content?

In modern times, online venues dominate the sharing of speech; the private cyber realm has superseded traditional public forums as the main hub of communication. In Packingham v. North Carolina (2017), the Supreme Court affirmed that “the most important places for the exchange of views today…is cyberspace—the ‘vast democratic forums of the internet’ in general, and social media in particular.” Private actors have broad control over the majority of speech that citizens share, which, according to David L. Hudson, makes them “analogous to a governmental actor.” As public demands for online speech restrictions grow, corporations are entering uncharted territory, tasked with writing the rules of internet censorship. 

As private actors, social media companies are legally able to regulate and restrict content on their platforms. Under the state action doctrine, they are not bound by the First Amendment’s guarantee of free speech. The doctrine was designed to limit the reach of the federal government and undergirds the principle that the US Constitution constrains only state action, not private action.

Over decades and even centuries, the Supreme Court has carefully established and refined the limits on our freedom of speech, using the First Amendment as its guide. Government officials are thus equipped with solid, painstakingly crafted precedents upon which to judge speech. Social media companies have no such luxury. We are essentially imploring them to build, within a matter of months, a comprehensive internet speech policy from scratch that can effectively reconcile some of the most divisive online issues.

It would be unwise and unrealistic to apply the First Amendment directly to social media platforms. We must not undermine the state action doctrine, lest we erode a necessary zone of privacy and invite government interference into our private lives. However, the First Amendment and the legal precedents the Supreme Court has built upon it should serve as a guide for social media companies as they navigate the murky waters of internet speech policy.

Although there are marked differences between private and public forums of speech, the intrinsic purpose of online platforms and traditional public forums is practically identical: to give citizens space to share their voices and communicate with others. Citing this parallel between the public and private communication sectors, commentators such as Benjamin F. Jackson have called for First Amendment protection to be applied to social media platforms, since speech on social media networks “simultaneously invoke[s] three of the interests protected by the First Amendment: freedom of speech, freedom of the press, and freedom of association.”

As established by a plethora of past court cases, freedom of speech is not an absolute right: certain categories of speech are not protected by the First Amendment. These same unprotected categories manifest themselves on private online forums and can therefore inform social media speech policy, especially in the context of hate speech, political disinformation, and other “harmful” content.

Five categories of potentially unprotected public speech have particular relevance to social media platforms: obscenity, false statements of fact, speech that incites imminent lawless action, fighting words, and true threats. Especially in the context of today’s most pertinent issues of hate speech, election security, and other harmful content, landmark First Amendment cases, and more specifically the precedents they set, may serve as solid foundations for policy responses to these seemingly irreconcilable problems.

Miller v. California (1973) established the tripartite Miller test, which determines whether speech or expression can be labeled obscene. The nuanced test aims to stem offensive and excessively lewd content while recognizing that some works have “literary, artistic, political, or scientific value.” The nature of obscene content is similar in the public and private realms, and it is therefore reasonable to suggest that the Miller test inform social media obscenity policy.
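To make the test’s conjunctive structure concrete, here is a minimal sketch, in Python, of how a review pipeline might encode the three prongs; the type and field names are hypothetical illustrations, not any platform’s actual moderation system.

```python
from dataclasses import dataclass

# Hypothetical encoding of the three Miller prongs as separate
# reviewer judgments; these names are invented for illustration.
@dataclass
class ObscenityReview:
    appeals_to_prurient_interest: bool  # prong 1, judged by community standards
    patently_offensive: bool            # prong 2, depicts conduct in a patently offensive way
    lacks_serious_value: bool           # prong 3, no literary, artistic, political, or scientific value

def is_obscene_under_miller(review: ObscenityReview) -> bool:
    # The test is conjunctive: all three prongs must be met, so a work
    # with serious value remains protected however lewd it is.
    return (review.appeals_to_prurient_interest
            and review.patently_offensive
            and review.lacks_serious_value)
```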

In New York Times Co. v. Sullivan (1964), the Supreme Court created the actual malice threshold for public officials and public figures, which requires the plaintiff to prove that the defendant made a libelous statement with “knowledge that it was false or with reckless disregard of whether it was false or not.” Cementing this precedent in Gertz v. Robert Welch, Inc. (1974), the high court stated that “the First Amendment requires that we protect some falsehood in order to protect speech that matters.”

This framework for limiting falsehoods is especially pertinent to the political disinformation that circulates on social media platforms, particularly amid concerns over election integrity. Political advertisements can contain false and libelous information about public actors. Although social media advertisements reside in the private domain, they serve the same purpose as advertisements aired in public. In keeping with the First Amendment, therefore, it would be reasonable to hold online political advertisements and promotions to the actual malice standard.

The actual malice test could further apply to disinformation campaigns. Inauthenticity is a reasonable proxy for determining whether content was posted with “knowledge that it was false or with reckless disregard of whether it was false or not.” Accordingly, social media companies would be tasked with enhancing the artificial intelligence systems they use to detect false accounts.
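As a rough illustration, the sketch below models an actual-malice-style gate for political content, treating inauthenticity as one path to the knowledge-or-recklessness requirement. Every function name, parameter, and threshold here is an invented assumption, not a real platform API.

```python
# Hypothetical actual-malice-style gate: falsity alone does not trigger
# removal; it must be paired with knowledge, reckless disregard, or
# (per the article's proposal) account inauthenticity.
def flag_political_content(is_false: bool,
                           knew_false: bool,
                           reckless_disregard: bool,
                           authenticity_score: float) -> bool:
    # Assumed: a bot-detection model scores accounts in [0, 1],
    # with low scores indicating likely inauthenticity.
    INAUTHENTICITY_CUTOFF = 0.2
    inauthentic = authenticity_score < INAUTHENTICITY_CUTOFF
    return is_false and (knew_false or reckless_disregard or inauthentic)
```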

Although “speech inciting imminent lawless action,” “true threats,” and “fighting words” constitute separate categories of unprotected speech, they are often intertwined in First Amendment analysis. In Brandenburg v. Ohio (1969), the high court established the “imminent lawless action” doctrine: speech loses First Amendment protection when it is “directed to inciting or producing imminent lawless action” and is “likely to incite or produce such action.”

As established in Chaplinsky v. New Hampshire (1942), “fighting words” are words which “by their very utterance, inflict injury or tend to incite an immediate breach of the peace.” R.A.V. v. City of St. Paul (1992) narrowed the reach of this precedent. While the court reaffirmed that fighting words are unprotected under the First Amendment, it held that a restriction on fighting words is unconstitutional if it singles out the ideas they express. In the words of the court, “the First Amendment prevents the government from punishing speech and expressive conduct because it disapproves of the ideas expressed.” A restriction of speech may not censor ideas or rest on viewpoint discrimination; as Justice Harlan famously put it in Cohen v. California (1971), “one man’s vulgarity is another’s lyric.”

Under the First Amendment, the government cannot censor speech out of mere aversion; to do so would inject bias into public discourse and unwittingly ban the expression of unpopular views, which still hold a place in public debate. Therefore, as with the imminent lawless action doctrine, fighting words must incite an immediate violent reaction to fall outside First Amendment protection. Unpopular views circulate on private online platforms as well. It is important that social media companies consider the precedents set by Brandenburg, Chaplinsky, and R.A.V. as they walk the fine line between disagreeable ideas and personal attacks, protecting users from harm while taking care not to suppress the expression of ideas.
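A minimal sketch of how the Brandenburg test might be encoded as a moderation rule follows; the inputs stand in for hypothetical classifier or reviewer judgments, and notably, no input measures how unpopular or offensive the idea is.

```python
# Hypothetical moderation rule mirroring Brandenburg's two prongs.
# Deliberately absent: any measure of the idea's unpopularity,
# reflecting the bar on viewpoint discrimination in R.A.V. and Cohen.
def is_unprotected_incitement(directed_at_imminent_lawless_action: bool,
                              likely_to_produce_that_action: bool) -> bool:
    # Conjunctive: advocacy that is not likely to produce imminent
    # lawlessness, however disagreeable, stays protected.
    return directed_at_imminent_lawless_action and likely_to_produce_that_action
```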

True threats are another form of unprotected speech. In Watts v. United States (1969), the Supreme Court held that political hyperbole and statements made in jest do not constitute true threats; the Court has since defined true threats as those statements made “with the intent of placing the victim in fear of bodily harm or death.” As it relates to recent online events, misinformation about the coronavirus could well fall under this definition, as phony cures could cause “bodily harm or death” to users. Of course, online platforms pose unique threats, such as phishing and spyware, that cause serious but not necessarily bodily harm. In these cases, the “intent” element of the true threat doctrine can still apply to maliciously laid fraudulent traps.
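A similarly hedged sketch of the true-threat standard, with the article’s proposed extension beyond bodily harm, might look like this; all identifiers are illustrative assumptions.

```python
# Hypothetical true-threat check: intent to cause fear of serious harm
# is required, while jest and political hyperbole are excluded per Watts.
# Extending "harm" beyond bodily injury (e.g., to phishing or phony
# cures) is the article's proposal, not settled doctrine.
def is_true_threat(intends_fear_of_serious_harm: bool,
                   jest_or_political_hyperbole: bool) -> bool:
    return intends_fear_of_serious_harm and not jest_or_political_hyperbole
```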

Though intended to maintain a safe and respectful online community, regulations that go beyond the limits set by the First Amendment present a slew of problems, including systematic ideological bias and the suppression of minority views. Restricting speech beyond constitutional standards undermines the very objective of social media platforms: to provide a medium for inter-ideological discussion.

No matter what decisions Twitter, Facebook, YouTube, and the other social media giants make, critics will inevitably find fault with internet speech policy. However, as speech in the private online realm has become increasingly public, it is reasonable to hold public and private expression to similar standards and limitations. At the very least, these companies must commit themselves further to transparency and draw upon First Amendment precedents to usher in informed policies that strike the delicate balance between freedom and security.

 

Faith Fisher is a Government and Spanish major at Cornell University. Fisher expects to graduate with a B.A. in 2022. 

 

Suggested citation: Faith Fisher, A 229-Year-Old Handbook for Social Media Companies, JURIST – Student Commentary, July 14, 2020, https://www.jurist.org/commentary/2020/07/faith-fisher-social-media/.


This article was prepared for publication by Brianna Bell, a JURIST Staff Editor. Please direct any questions or comments to her at commentary@jurist.org.


Opinions expressed in JURIST Commentary are the sole responsibility of the author and do not necessarily reflect the views of JURIST's editors, staff, donors or the University of Pittsburgh.