How France is Battling Online Hate Speech with a New Bill

On Wednesday, French lawmakers passed a bill to regulate online hate speech in the country. The law imposes a strict obligation on online platforms to remove offensive content, such as hate speech, violence, or racism, within twenty-four hours, or risk fines and blocking. This over-zealous attempt to regulate the content available across online platforms comes with its own set of drawbacks, making the cure more problematic than the disease.

Tearing the Fabric of “Freedom of Speech”

The foremost concern raised by a law imposing a strict deadline for regulating online content is its implications for freedom of expression. Categorizing content by its role in propagating hate speech requires an analysis of the material to determine the context in which it was uploaded.

Unlike in the United States, freedom of speech in France is subject to legislative limitations. Article 10 of the Declaration of Human and Civic Rights states that “No one may be disturbed on account of his opinions, even religious ones, as long as the manifestation of such opinions does not interfere with the established Law and Order.” Article 11 further states that:

“The free communication of ideas and of opinions is one of the most precious rights of man. Any citizen may therefore speak, write and publish freely, except what is tantamount to the abuse of this liberty in the cases determined by Law.”

France is also subject to Article 10 of the European Convention on Human Rights which protects the right to freedom of speech, but also acknowledges the state’s power to limit it.

What must be noted is the extent to which limitations may be imposed on such free speech. In a 2013 case, the European Court of Human Rights (ECHR) ruled that the conviction of a protester under French legislation for insulting the President of France was contrary to Article 10 of the European Convention on Human Rights. In Thorgeir Thorgeirson v. Iceland, the ECHR ruled that the imposition of a fine following the publication in a daily newspaper of incidents of police brutality was disproportionate to the legitimate aim of “protecting the reputation of others,” and went on to hold that it violated Article 10. Further, in Jersild v. Denmark, the ECHR found that convicting a journalist for conducting an interview in which certain individuals made racist remarks, which were thereby disseminated, violated Article 10. The court stated that:

“The punishment of a journalist for assisting in the dissemination of statements made by another person in an interview would seriously hamper the contribution of the press to discussion of matters of public interest and should not be envisaged unless there are particularly strong reasons for doing so”.

It is clear from the above case law that the ECHR maintains a high regard for the protection of freedom of speech, especially through journalism, and has therefore come down hard on countries violating it.

The recently passed French bill imposes a strict 24-hour window, and its impact will vary across online platforms, since platform operators differ markedly in their ability to deal with such context-specific content. Companies considered Internet giants deploy various AI-based algorithmic technologies and employ specialized staff to identify and curb abuses across the online space. It is pertinent to mention, however, that automated moderation tools, such as AI-based technology, run the risk of producing false negatives and false positives, which would place a heavy onus on platform operators to determine the legality or illegality of content based on the basic parameters provided by the law. Of even greater significance is the burden such a strict deadline imposes on platforms lacking the resources of the giants. A quicker path for these smaller online platforms would be to base content removal entirely on their Terms of Service. Using Terms of Service to regulate content places the entire onus on the conscience of the platform operators to decide what is and is not acceptable.

It is particularly notable that courts have, on occasion, had to grapple with what does and does not constitute “hate speech.” In Gündüz v. Turkey, a sect leader was convicted of propagating hate speech on the basis of his discrediting of a particular practice and his defense of Sharia law. The ECHR held that defending the practice of Sharia, without calling for action to violate the peace of the country, could not be characterized as “hate speech,” and that the conviction therefore violated Article 10. In Birol v. Turkey, the ECHR held that the making of fascist remarks to insult the Minister of Justice did not amount to any form of incitement to violence and therefore did not amount to “hate speech.”

These cases show how, time and again, courts have had to intervene to delineate what would and would not constitute hate speech. In France, by dangling the threat of a fine of 4% of global revenue, the bill pushes online platforms to over-remove content based on a limited understanding of what constitutes hate speech. Content that is perfectly legal would end up pushed into oblivion as purportedly illegal, while any material that slips through the removal process and remains available would incur the severe wrath of the French government. This exercise would therefore result in a fundamental breach of the idea of “freedom of speech” and would be fatal to the existence of a vibrant democracy.

Violating the EU E-Commerce Directive

Enforcing a bill that places the burden of proof on platform operators risks violating Articles 14 and 15 of the EU E-Commerce Directive. Article 14 of the Directive obliges Member States to ensure that a service provider is not held liable for the information it stores, so long as the provider does not know of the existence of the illegal information and, upon obtaining such knowledge, acts expeditiously to ensure its removal. Imposing a strict fine on platform operators who fail to remove content deemed illegal by the French authorities contravenes the ideals enumerated by the Directive. In demanding full compliance, the French authorities failed to take into account the distinct capabilities across the vast plethora of online service providers. Article 15 of the EU E-Commerce Directive states that:

“Member States shall not impose a general obligation on providers, when providing the services covered by Articles 12, 13 and 14, to monitor the information which they transmit or store, nor a general obligation actively to seek facts or circumstances indicating illegal activity.”

It can be easily deciphered that the very objective of the French bill leads to a direct violation of Article 15. Through Article 15, the EU sought to protect service providers from onerous obligations imposed by Member States to monitor the content on their platforms.

Risk of Litigation

Online platform operators and service providers now face a double-edged sword. On one hand, operators risk fines and further sanctions if they are unable to comply with the guidelines imposed by the authorities. On the other hand, in the race to comply, they risk litigation based on violations of freedom of speech and expression. As discussed above, content moderation is highly context-dependent and continuously runs the risk of over-moderation and over-removal. No method of regulating hate speech will always deliver a perfect result; each runs the risk of removing legal content from the internet. Regulatory regimes that place such a high onus on platform operators expose them to rights-based litigation and court fines, putting a hefty dent in the business model of such online enterprises.

The Scope of an Alternative

Following the “Facebook Mission” Report Model

In May 2019, a team of experts submitted a high-level report on the regulation of social networks to curb online hate speech. The report promoted an accountability-based model imposing a legal obligation on social networks, which it claimed were best placed to assume the task of moderating online content. Such moderation would be based on a transparent model highlighting the terms of usage, the mechanisms adopted, and the dissemination method applied.

The report, as mentioned, provided for a transparent moderation procedure coupled with accountability. The method calls for content moderation at regular intervals and in a timely manner, rather than under a strict deadline, so as to better categorize the data available on such networks. This moderated approach to handling online content would go a long way toward ensuring that legal content is not taken down under the garb of containing hate speech.

The “Unbundling” Procedure

The “unbundling” procedure requires that online platform operators open up unregulated versions of the services or platforms they operate, so that content uploaded there is screened through differential ranking and content policies set by competitors. This shifts the burden of regulating content on the platform from the platform operator to the respective competitors in the market. Additionally, the procedure lets users choose content according to their own preferences and be protected from content that might adversely affect them.

The ongoing COVID-19 pandemic has also produced an epidemic of hostility, with reports of rising hate speech coming from across the globe. But curbing one crisis cannot come at the expense of fueling another. Countries like France, Ethiopia, and Nigeria, among many others, need to be careful in their noble efforts to curb hostility amongst humankind, for the betterment of society can never come by sacrificing “freedom of speech.”

 

Eeshan Mohapatra is a third-year B.A. LL.B. (Hons.) student at NALSAR University of Law in Hyderabad, India.

 

Suggested citation: Eeshan Mohapatra, How France is Battling Online Hate Speech with a New Bill, JURIST – Student Commentary, May 19, 2020, https://www.jurist.org/commentary/2020/05/eeshan-mohapatra-france-hate-speech-bill/.


This article was prepared for publication by Tim Zubizarreta, JURIST’s Managing Editor. Please direct any questions or comments to him at commentary@jurist.org


Opinions expressed in JURIST Commentary are the sole responsibility of the author and do not necessarily reflect the views of JURIST's editors, staff, donors or the University of Pittsburgh.