Introduction
Intermediaries are platforms that facilitate the flow of information across the Internet between the creators of content and its consumers. The term primarily refers to businesses that mediate transactions and put third parties in touch with one another through the internet.
Given the easy access and widespread availability of social media platforms, coupled with their capacity to disseminate information within seconds, the question arises whether such intermediaries should be liable for providing a platform to publish harmful content or for acting as secondary publishers of that content. Imposing liability on social media intermediaries also raises concerns about violations of freedom of expression and the right to privacy.
Models Of Social Media Intermediary Liability
As elucidated in the previous article, in order to prevent the potential misuse of social media to spread violent messages and hateful speech, governments across the world have taken up the responsibility of regulating social media intermediaries. Before delving into a global analysis of these regulatory efforts, it is imperative to examine the different models of liability adopted across jurisdictions.
Based on the legislative and regulatory frameworks governing internet intermediaries across jurisdictions, these models can be grouped into three broad categories:
A. Strict Liability Model
Under the Strict or Blanket Liability Model, the intermediary becomes liable for content posted by a third party, even if it is not aware of the illegal nature of that content. Under these conditions, the only way for an intermediary to protect itself from legal responsibility is to actively monitor, filter, and remove content that is “likely” to violate intellectual property rights. However, even after monitoring and removing content, an intermediary may still be held liable for infringement if it turns out that some of the content was missed.
All intermediaries, regardless of their size or role, are held accountable under blanket liability regimes, since these regimes do not differentiate between active and passive intermediaries. Should they fail to adequately monitor user behaviour, remove content, or report infractions, they may face financial penalties, criminal liability, and the revocation of any business or media licences they hold.
B. Conditional Liability Model
Under the Conditional Liability or ‘Safe Harbour’ Model, the intermediary may be exempted from liability for third-party content if certain conditions are met. These conditions include removing content as soon as the intermediary is notified of it, notifying the content creator of the infringing material, and disconnecting repeat infringers. An intermediary that fails to comply with these conditions may be held liable for damages. The ‘safe harbour’ approach differs from the ‘strict liability’ model in that it does not require intermediaries to actively monitor and filter content in order to protect themselves from legal responsibility.
The ‘notice and takedown’ variant of conditional liability has been called into question because it is easy to exploit and promotes self-censorship by placing intermediaries in the position of determining the legitimacy of content in a quasi-judicial capacity. In practice, the policy incentivises intermediaries to delete information quickly after receiving notice, rather than expending resources to examine the veracity of the request and risking a lawsuit for doing so. As a direct consequence, legitimate content may end up being restricted.
C. Broad Immunity Model
This approach absolves the intermediary of responsibility for a wide variety of content produced by third parties, without distinguishing between the function of the intermediary and the kind of content being distributed. It treats intermediaries not as “publishers”, who are accountable for the content they distribute even though it was produced by someone else, but as “messengers”, who are not responsible for the content they carry and convey.
Global Comparative Analysis
States across the world have laws under which intermediaries can be exempted from liability for users’ content on their platforms. Where no specific legislation is in place, responsibility for content violations is typically assigned under a patchwork of other laws. The vast majority of states impose liability on the basis of knowledge of the unlawfulness of content, and the categories of removable content are frequently tied to a country’s penal code.
Certain categories of illegal content receive heightened attention from authorities in all relevant jurisdictions: pornography, particularly that involving minors; offences endangering state security, such as terrorism-related content; and violations of decency laws, which frequently carry expedited timelines for removal. To better understand the level of responsibility attached to intermediaries under the Indian model, we need to look into the laws and regulations of countries that have devised their own mechanisms to regulate intermediary liability.
European Union
The European Union confers a safe harbour on intermediaries pursuant to the Electronic Commerce Directive, which the UK implemented through the E-Commerce Regulations. Under the Directive, intermediaries that merely store third-party content, such as social networking providers, enjoy a conditional safe harbour from liability. Liability arises when
a) the intermediary has knowledge of the unlawful content; and
b) it fails to remove the content despite being aware of its illegal nature.
A typical social networking provider, such as Facebook, Twitter or Reddit, which does not tend to create content but simply provides a platform for users to post it, falls within this domain.
In the case of Bunt v. Tilley, the plaintiff filed a suit against the persons who made defamatory remarks against him as well as the ISPs that carried them. Justice Eady, referring to the ISPs as ‘passive mediums of communication’, held that they were mere conduits, not secondary publishers, and thus could not be held liable.
Although the E-Commerce Directive paved the way for several conventions and guidelines on intermediary liability, the legislation suffers from lacunae on numerous fronts. Terms such as ‘expeditiously’, ‘actual knowledge’ and ‘illegal content or activities’ are not properly defined in the framework; however, subsequent judgments of the European courts have explained these terms more comprehensively, thereby adding to the precision and flexibility of the legislation.
United States of America
The protection provided to internet intermediaries in the U.S. is contained in the Communications Decency Act, 1996. As per Section 230 of the said legislation, “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” In Zeran v. AOL, the court, interpreting Section 230, reasoned that “liability upon notice reinforces service providers’ incentives to restrict speech and abstain from self-regulation,” and accordingly declined to impose such liability.
In Dart v. Craigslist, Inc., Craigslist, an online classified advertisement service in the US, hosted listings for various items and services, including a section for “erotic services”. This caught the attention of law enforcement, which sent Craigslist a notice to remove the ads concerned, and Craigslist complied. Sometime later, Sheriff Thomas Dart sued Craigslist, holding it responsible for interfering with the public’s health and safety. The Court held that Craigslist was an intermediary and was immune from liability for wrongs committed by third parties.
This suggests that the interpretation of the section has been applied consistently. The provision is therefore considered one of the most influential shields protecting intermediaries from arbitrary and unnecessary liability for the activities of third parties, fostering a vision of the internet free from undue intervention.
Brazil
Brazil passed the Marco Civil, popularly known as the “constitution for the internet”, as it puts forward a rights-based approach to regulating the internet:
- A provider of an internet connection is not civilly liable for third-party user content.
- Liability is incurred only upon failure to comply with a court order.
- An exception exists for what is known as “revenge pornography”: privacy violations caused by the unauthorised sharing of content that contains nudity or is of a sexual nature. Even here, intermediaries can be held liable only if they fail to disable access to the content in a timely manner after receiving notice of it.
The point of distinction between the Marco Civil model and others is that a host’s risk of liability is not based on actual or constructive knowledge. The difficulty in putting the model into practice stems from the high-volume, low-value nature of many disputes: for claimants without the legal expertise or financial resources to file a court application, the process may act as a barrier to accessing justice, and the court system itself risks being overburdened.
Critical Analysis Of The Indian Regime
Having analysed the global laws that regulate and monitor internet intermediaries, including social media platforms, it is essential to determine the reasonableness of the Indian regime in light of the global context. While there exist substantial differences among the models prescribed by various jurisdictions, the element of ‘actual knowledge’ is common to all of them. Its absence essentially shifts the burden of determining the legality of content onto the intermediaries, which generally lack the capacity to make that determination.
As this task properly falls within the expertise of judicial or executive authorities, intermediaries should be deemed to have knowledge of the illegality of user content only upon receiving notice from the relevant authorities. Leaving it to an intermediary to decide the legality of user content is susceptible to abuse, as the process lacks due process safeguards, and would diminish the ability of the Internet to perform its function as the key means by which individuals exercise their freedom of expression.
With the intermediary guidelines entering the picture, the Government has acquired significant power to regulate social media and online news. Parts of these Rules have been condemned as an attempt to ‘curb criticism’ of the government on the internet. Further, requiring intermediaries to “proactively monitor” their content by removing or disabling public access to unlawful material not only poses a serious threat to the free speech and privacy of users but also impairs the smooth functioning of intermediaries by compelling them to regulate and filter every piece of content posted on their platforms.
On the brighter side, these guidelines require intermediaries to crystallise users’ responsibility not to display, modify or store information that is obscene, pornographic, invasive of another’s privacy or paedophilic. Furthermore, intermediaries are required to remove non-consensual intimate images within 24 hours of receipt of a complaint, thereby creating a safer space on the internet for users of OTT platforms and paving the way for the prevention of harassment on social media platforms. Additionally, the Rules provide a lucid and unambiguous classification of intermediaries, namely intermediaries, social media intermediaries and significant social media intermediaries, with the level of obligations varying according to the classification.
However, the formulation of these Rules without public consultation, coupled with the fact that they were introduced as delegated legislation under the IT Act rather than through a parliamentary enactment, has generated significant controversy. Their far-reaching implications for freedom of speech and the right to privacy add to an already contentious situation.
It is thus crucial to devise a mechanism that ensures intermediaries do not have to bear the brunt of the content posted on their platforms. Inspiration can be drawn from the models of other jurisdictions to amend the Intermediary Rules, to the extent that a balance is maintained between national security on the one hand and the fundamental rights of citizens and the interests of social media platforms on the other. The Government’s stake in the supervision and oversight of these platforms should be significantly reduced, so that it does not establish itself as a watchdog over them, as that would only stifle free speech and expression.
This article is written and submitted by Chhavi Singla during her course of internship at B&B Associates LLP. Chhavi is a B.A. LLB (Hons) 4th year student at Rajiv Gandhi National University of Law.