Rule 3(1)(b), Intermediary Liability, and the Burden of “Reasonable Efforts”
INTRODUCTION
The legal framework governing online intermediaries in India is at a crossroads. Nearly a decade ago, in Shreya Singhal v. Union of India (2015) [“Shreya Singhal”], the Supreme Court [“SC”] established a definitive safeguard for platforms: intermediaries could be held liable for user-generated content only if they failed to comply with a court order or a government notification directing its removal. This established what has since been called the “actual knowledge” standard, under which an intermediary is deemed to have knowledge of unlawful content only upon receiving such an order or notification. It became the foundation of safe harbour protection under § 79 of the Information Technology Act, 2000 (“IT Act”) and has preserved space for free speech without burdening platforms with quasi-judicial responsibilities.
However, this safety net has been steadily eroded. Rule 3(1)(b) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, as amended in 2022, requires intermediaries to make “reasonable efforts by itself” to prevent users from hosting or sharing prohibited content. Although the phrase sounds innocuous, I argue that it carries enormous legal uncertainty and raises several questions: What exactly counts as a “reasonable effort”? When must it be made, and how much is enough?
The stakes of this ambiguity have only grown as a result of the Bombay High Court’s [“HC”] ruling in Kunal Kamra v. Union of India (2024) [“Kamra”]. There, the Court struck down sub-clause (v) of Rule 3(1)(b), which pertained to government-run Fact-Checking Units [“FCUs”]. However, it left untouched the broader “reasonable efforts” obligation upon intermediaries, without clarifying its scope.
Adding to this uncertainty is a Right to Information [“RTI”]-based investigation report released on 20 June 2025 by the Internet Freedom Foundation [“IFF”], which reveals that FCUs across the country remain operational despite Rule 3(1)(b)(v) being struck down in Kamra. Though FCUs are not backed by statute, the government contends that their notices are valid so long as takedowns are not mandatory. The ambiguity over whether such FCU notices trigger the “reasonable efforts” obligation places intermediaries in a precarious position: they must choose between risking their safe harbour through non-compliance and protecting free speech. At the same time, the government has expanded its content-moderation infrastructure through the Sahyog Portal, which enables ministries to issue takedown notices under §79 of the IT Act. While these notices do not carry the binding force of blocking orders under §69A, platforms increasingly treat them as de facto mandates, especially when paired with the threat of losing safe harbour. The constitutional validity of this practice is under challenge in X Corp v. Union of India.

Much of the existing commentary has focused on Shreya Singhal and the constitutionality of government-run FCUs. Yet within this body of literature, one vital issue has remained largely unexplored: the expanded obligation on intermediaries to make “reasonable efforts by itself”. Introduced through the 2022 amendments, this clause has subtly displaced the Shreya Singhal standard without sufficient judicial or academic interrogation.
Through this essay I seek to bridge that gap.
I contend that this issue is no longer merely abstract. Consider Starbucks v. NIXI [“Starbucks”], a case pending before the Delhi High Court. The main question before the Court is whether intermediaries must actively investigate and remove trademark-infringing content to meet the “reasonable efforts” standard. Such an interpretation would obliterate the “actual knowledge” test and turn platforms into quasi-adjudicators of intellectual property disputes without any clear procedural safeguards.
I must clarify that this piece does not aim to resolve the constitutional validity of FCUs or takedown powers. Instead, it focuses on the absence of a workable definition of “reasonable efforts” and the chilling consequences that could follow, and in some cases already have followed, from such ambiguity. By analysing how this undefined standard exposes platforms to regulatory overreach and encourages self-censorship, this essay aims to show why acting on it now is essential to both digital governance and constitutional rights. The essay is divided into two parts. Part I first revisits the threshold laid down in Shreya Singhal and argues that it has since eroded. It then turns to Kamra, where I argue that courts and contemporary literature have missed an important opportunity to clarify how this duty should be understood, and demonstrates how other courts have struggled, and in some cases refused, to define this standard. In Part II, I first examine how these unresolved issues carry commercial implications, and then how they have become a tool for unregulated censorship of free speech. I conclude by offering some broad suggestions on how legal policy can better define and constrain this obligation going forward.
PART I

The Bombay High Court’s Missed Opportunity in Kamra
In Kamra, the Bombay HC struck down Rule 3(1)(b)(v), which required intermediaries to act upon FCU-declared content that was “fake, false or misleading” about government business. The Court found the provision vague and a violation of Article 19(1)(a), particularly since “misleading” is not a constitutionally permissible ground for restriction under Article 19(2).
However, the majority opinion remained silent on the interpretation of the broader language of Rule 3(1)(b), which obliges intermediaries to make “reasonable efforts by itself.” I argue that in this silence lies a significant missed opportunity. Although Justice Neela Gokhale’s opinion was the dissent, it stands as the only judicial analysis of what “reasonable efforts” actually entails.
Analysing the unamended portion of Rule 3(1)(b), she observed:
“Thus, if an intermediary learns of any offensive information, suo-moto or upon a complaint, it is expected to make ‘reasonable effort’ to cause the user not to host the same.”
The phrase “suo moto or upon a complaint” is especially significant for two reasons.
First, the source of “a complaint” is not qualified. This gives rise to the impression that the obligation of “reasonable efforts” arises independently of any formal government directive.
Second, and especially noteworthy, is Justice Gokhale’s use of the words “suo-moto or”. I argue that this is a strong indication that the due diligence obligation may not be the purely reactive one established in Shreya Singhal. It stands to reason that the words “reasonable efforts by itself” may have created a proactive responsibility to act on the basis of internal content moderation processes.
Further, Justice Gokhale also observed that “reasonable efforts” are not linked to the now-struck-down FCU clause. She explained that even under the earlier version of the Rules, intermediaries were expected to show due diligence by making reasonable efforts not to host unlawful content. Failing to do so could result in the loss of safe harbour protection.
This reveals that the “reasonable efforts” standard is not a creature of the now-struck-down FCU clause and, as a standalone obligation, survives Kamra. Intermediaries remain bound to comply, even though the FCU mechanism no longer exists. While Justice Gokhale’s ultimate conclusion on the constitutionality of Rule 3(1)(b)(v) was in the minority, I argue that her analysis of “reasonable efforts” was neither contradicted nor engaged with by the majority. It has been over a year since the decision came out, and contemporary literature on Kamra has focused entirely on the constitutionality of Rule 3(1)(b)(v), which makes this concern even more pressing.
This raises an interesting dilemma: the dissenting judge’s reasoning may now serve as the only interpretative source for a vital legal obligation. To elaborate, the bench structure in Kamra becomes relevant. The matter was initially heard by Justices Gokhale and Patel, who arrived at divergent conclusions on the constitutionality of Rule 3(1)(b)(ii) and (v). This resulted in a reference to a third judge under Clause 36 of the Letters Patent; however, it is well-established law that the scope of such a reference is limited strictly to the points of disagreement.
Justice Gokhale’s remarks on “reasonable efforts” were made in relation to the surviving part of Rule 3(1)(b), not the FCU clause per se. Since her colleagues neither endorsed nor rejected this interpretation, it remains unchallenged and arguably still valid. At the very least, her reasoning brings to the fore an urgent and unresolved question: what does the threshold of “reasonable efforts” look like in practice? Must every complaint trigger a takedown? Are intermediaries obligated to deploy content monitoring systems? Would general awareness, even in the absence of “actual knowledge”, be sufficient to impose liability? Would intermediaries be required to take down content suo moto?
Justice Gokhale’s analysis becomes highly instructive in this regard. It suggests that the “reasonable efforts” requirement is both more ambiguous and more expansive than the actual-knowledge standard laid down in Shreya Singhal. This has serious implications for intermediary liability.
This is particularly relevant now, in the wake of RTI disclosures showing that FCUs continue to function informally despite the Court’s ruling. Even if FCU notices no longer carry a legal mandate, they may very well trigger the intermediary’s duty to act, a possibility under Justice Gokhale’s interpretation.
It is therefore argued that the obligation of “reasonable efforts” has been largely ignored in both judicial discourse and public commentary and deserves close scrutiny. Two decisions that add to this doctrinal uncertainty are X v. Union of India and IndiaMart Intermesh Ltd. v. Puma SE. Rather than clarifying the scope of this obligation, these cases deepen the ambiguity. I will demonstrate that, together, they raise more questions than answers about what “reasonable efforts” now require and what their chilling consequences may be.
X v. Union of India: Acknowledging a New Standard
In X, decided a year before Kamra, the Delhi High Court reached a conclusion consistent with Justice Gokhale’s later analysis, expressly departing from the Shreya Singhal threshold. It rejected the intermediary’s reliance on Shreya Singhal, which limited takedown obligations to situations involving court orders. The Court reasoned that Shreya Singhal did not consider the amended Rule 3(1)(b), which now requires intermediaries to make “reasonable efforts” to prevent users from uploading prohibited content [X, ¶58]. This duty, the Court said, is independent of actual knowledge and does not require a takedown notice to be triggered. It emphasised that the 2022 amendments increased the burden on intermediaries and widened the grounds for loss of safe harbour.
Though X aligns with Justice Gokhale’s reasoning that the 2022 amendment increased the responsibility of intermediaries, it left two fundamental questions unresolved. First, what does the threshold of “reasonable efforts” actually entail? While courts have acknowledged the obligation, they have provided little clarity on what compliance looks like. Must platforms proactively monitor content, use automated flagging, or respond only to user complaints? Second, it remains unclear whether the duty demands action only upon notification or includes a broader, suo moto responsibility to take down content.
IndiaMart and the Reversion to “Knowledge”
A recent decision of the Delhi HC complicates matters further. While X had acknowledged that the 2022 amendments introduced a higher obligation on intermediaries, the court in IndiaMart sidestepped that obligation. Although it briefly mentions the “reasonable efforts” requirement (¶86), it ultimately chooses not to engage with what that standard actually entails. Instead, the court appears to revert to the “actual knowledge” standard articulated in MySpace v. Super Cassettes.
The court avoids the main question raised by the amendment: whether new proactive obligations must now be fulfilled to retain safe harbour. In ¶94, for instance, the court holds that as long as IndiaMart makes “reasonable efforts” to prevent users from listing infringing content, it cannot be barred from allowing sellers to use brand names like “PUMA” on its platform. However, this conclusion rests on a conceptual vacuum: what the obligation of “reasonable efforts” actually requires, a question also left unaddressed in X.
Complicating matters further is the court’s reliance on MySpace, which follows the “actual knowledge” standard. This interpretation cannot be reconciled with principles of statutory interpretation. The Court cannot hold that existing measures [takedown upon notice and grievance redressal mechanisms] fulfil “reasonable efforts”, as these mechanisms already existed under the earlier framework, well before the 2022 amendment. If they were deemed adequate, what was the legislative intent behind explicitly introducing a new, standalone obligation to make “reasonable efforts”? Hence, this interpretation is unlikely to hold.
This collapse leads to two major problems:
- Is Knowledge Still Relevant? If “reasonable efforts” is understood to require action irrespective of a notice or actual knowledge, as the IT Rules and Kamra suggest, then the “knowledge” standard from MySpace may be outdated. Yet the judgment dodges this question and refuses to engage with how “reasonable efforts” might now require more after the 2022 amendments.
- What Level of Human Involvement Suffices? MySpace makes clear that automated filters or content modification tools do not, by themselves, constitute knowledge. However, it states that “human involvement in some form may be attributed to knowledge” [MySpace, ¶37]. Does this mean that platforms with some level of human oversight alongside automated tools may be held liable? IndiaMart does not answer this, nor does it define what quantum of involvement would be required.
It must be noted that IndiaMart does briefly acknowledge the risks of intermediaries being forced to adjudge infringement. It notes that they are likely to overcompensate to avoid commercial risk, effectively enabling private censorship that would undermine free speech protections (IndiaMart, ¶71). While this reflects a surface-level awareness of the dangers of over-correction, I argue that the court’s engagement is shallow, as it fails to interrogate the cause: the ambiguous legal standard of “reasonable efforts” introduced by the 2022 amendment.
Filling the Vacuum: MeitY’s Advisory and the Drift Toward Proactive Censorship
Despite concerns about regulatory overreach, Indian courts have so far failed to engage sufficiently with the ambiguity around “reasonable efforts”. This silence has allowed the executive to fill the interpretive void with expansive readings that carry potentially chilling effects on free speech. The clearest example is an advisory issued in December 2023 by the Ministry of Electronics and Information Technology [“MeitY”]. The advisory requires intermediaries to “also” identify and promptly remove misleading or impersonated content, adding a new proactive burden beyond existing mechanisms.
It states:
“Rule 3(1)(b) mandates intermediaries to communicate their rules, regulations, privacy policy, and user agreement in the user’s preferred language. They are also obliged to ensure reasonable efforts to prevent users from hosting, displaying, uploading, modifying, publishing, transmitting, storing, updating, or sharing any information… This rule aims to ensure platforms identify and promptly remove misinformation, false or misleading content, and material impersonating others, including deepfakes.”
The use of “also” is significant, as it is an express recognition that these “reasonable efforts” are in addition to existing requirements like user agreements or grievance redressal mechanisms. Further, the phrase “aims to ensure platforms identify and promptly remove” signals a move away from a reactive duty (to act upon notice) towards a proactive obligation, one under which platforms are expected to detect harmful content on their own. Combined with vague terms like “reasonable efforts”, which judicial observations such as Justice Gokhale’s in Kamra suggest may extend to suo moto cognizance, this creates a reasonable apprehension among intermediaries, pushing them to over-correct just to stay legally safe. Hence, it is argued that the judiciary’s lack of engagement with the obligation of reasonable efforts, especially in light of such expansive interpretations by the government, may have disastrous consequences for free speech.
Taken together, though courts have acknowledged the importance of “reasonable efforts”, they have stopped short of defining it. The executive, in contrast, has expanded its interpretation without sufficient legislative support. This leaves platforms caught in the middle, forced to navigate the ambiguity and often having to choose between risking liability and censoring lawful content.
These consequences, however, are not merely abstract. As I will demonstrate in Part II, this legal ambiguity is already having effects both in commercial disputes and in the public sphere, where state-backed “fact-checking” mechanisms continue to exert a chilling influence on speech.
PART II
The Cost of Uncertainty: Commercial and Free Speech Risks Under Rule 3(1)(b)
In Starbucks, a pending case before the Delhi HC, the main question is whether intermediaries must actively investigate and remove trademark-infringing content to meet the “reasonable efforts” standard. In this second part of the essay, I aim to move beyond the abstract and explore two broad issues that this uncertainty has caused. First, I examine the Starbucks dispute to highlight how intermediaries are being pushed into quasi-judicial roles. Second, I explore the growing role of state-backed “fact-checking” mechanisms and informal takedown channels like the Sahyog Portal to highlight how this ambiguity has led to over-compliance and effective censorship in the name of diligence. Together, these examples illustrate why defining “reasonable efforts” is not just a housekeeping concern; it has become essential to protecting both fair business practices and democratic speech online.
1. The Commercial Risk: Intermediaries as IP Adjudicators
In Starbucks, the plaintiff argues that intermediaries are required under the “reasonable efforts” clause to proactively identify and prevent trademark-infringing domains.
If accepted, this interpretation would effectively turn intermediaries into de facto IP adjudicators. Instead of waiting for court orders or any legal process, platforms would have to make complicated determinations about trademark infringement on their own, at the risk of losing safe harbour protection under §79.
In a 14 July 2023 order, the Delhi HC flagged two serious issues with this approach:
- First, there is no definition of “reasonable efforts” in the statute or accompanying government guidance. The term is “shrouded in obscurity and would be a fertile ground for litigation and uncertainty in the mind of the public.” [Starbucks, ¶13].
- Second, the Court expressed concern over requiring intermediaries to make legal determinations on infringement, a function it observed was squarely within the judiciary’s domain.
Acknowledging these risks, the Court directed MeitY to clarify how an intermediary could be conferred the power to decide the infringing nature of information posted on its website. The Court further acknowledged that the scope of “reasonable efforts” would need to be clarified.
Accordingly, MeitY issued a press release in December 2023. The advisory states that intermediaries must also “identify and promptly remove” unlawful content, placing on them a proactive obligation beyond merely responding to takedown requests.
However, this clarification has only deepened the confusion instead of resolving the underlying issue. While MeitY affirmed the existence of a proactive duty, it still failed to explain how intermediaries could be conferred the power to determine what is infringing. Absent such guidance, I argue that the burden of interpretation rests entirely on the platform, without any procedural safeguards.
Until the courts provide a substantive framework, intermediaries remain stuck in a bind: they either risk liability by doing too little, or take down questionable content pre-emptively to stay “safe”, even if it turns out to be lawful. I argue that this dynamic creates a chilling effect not only on commercial innovation but also on lawful expression, parody, competition and even criticism, in violation of Article 19(1)(a), especially in high-stakes IP matters like Starbucks.
2. The Public Interest Risk: Fact-Checking and the Return of Censorship by Proxy
The ambiguity surrounding “reasonable efforts” has equally troubling implications for public discourse. As revealed by the IFF’s RTI disclosures, even post-Kamra, several state governments have quietly retained or constituted FCUs. These bodies now operate without a statutory basis, relying instead on informal advisories or administrative directions. More importantly, they no longer compel intermediaries to take down content.
However, they probably do not need to.
This is because Rule 3(1)(b), which remains operative, still requires intermediaries to make “reasonable efforts” to prevent the dissemination of unlawful content. When a government FCU flags content as false or misleading, that alone may be enough to put an intermediary “on notice.” Faced with uncertainty over what qualifies as a “reasonable effort,” many platforms may take down the content simply to avoid the risk of losing their safe harbour.
I argue that this amounts to a workaround of the prohibition laid down in Kamra. The executive may no longer mandate takedowns, but it can still indirectly pressure intermediaries into censorship while retaining plausible deniability.
Another recent example is the Sahyog Portal, a centralised interface now used by government departments to issue takedown notices under §79(3)(b) of the IT Act. Originally conceived during the COVID-19 pandemic as an emergency coordination tool, Sahyog has since evolved into a de facto censorship pipeline. Through it, law enforcement agencies can issue removal notices en masse, often with minimal transparency, no opportunity for hearings, and no public record of the rationale behind the action.
In submissions before the Karnataka HC, the government defended the Sahyog Portal by arguing that online platforms are fundamentally different from traditional media outlets. Unlike news editors or TV producers, who make deliberate editorial choices, social media platforms rely on opaque algorithms operating at speed and scale. For that reason, the Centre has argued that a broader removal mandate under §79 is necessary, allowing agencies to target harmful content more efficiently.
However, this means that platforms can lose safe harbour simply for failing to act on opaque digital notifications issued without a hearing or a reasoned order, causing them to comply out of fear. I argue that this raises valid concerns that intermediaries will be nudged into acting pre-emptively, since the risk of doing too little is higher than the cost of doing too much. So, while Kamra aimed to prevent censorship by curtailing state power, the Court’s failure to engage with the obligation of “reasonable efforts” may have undermined the very purpose it strived to serve.
Suggestions and the Way Forward
To substantively address the confusion around what “reasonable efforts” means, India needs to move beyond such a vague standard of intermediary liability and adopt a more rights-centric approach to intermediary regulation. As this piece has shown, ambiguity over liability pushes platforms toward over-censorship, weakens user rights, and punishes smaller intermediaries the hardest.
Before offering any broad recommendations, it is essential to clarify the limited scope of this piece. This essay does not attempt to propose a comprehensive regulatory overhaul of intermediary liability. Such an endeavour would require extensive stakeholder consultation and legislative deliberation, both of which are beyond the remit of this analysis.
Instead, the objective here is to demonstrate the pressing need to define the contours of “reasonable efforts”, and the serious implications its absence could have for commercial interests and citizens’ rights in India.
So, while this is not a blueprint, I offer below a few suggestions aimed at moving the conversation forward.
- What “Reasonable Efforts” Entails: The concept of “reasonable efforts” should not be left open-ended for subordinate legislation such as government notifications and rules to fill. Parliament must take the lead in clearly defining the term, whether through an amendment to the IT Act or through the proposed Digital India Act. In any case, courts must step in, engage with the obligation, and provide an interpretation that balances user rights with the need for content moderation.
- Adopt a Tiered and Function-Specific Framework: This piece has demonstrated the problems with the current intermediary liability regime, which I believe deserves revisiting. The Indian government has already begun taking steps towards doing so.
In line with that, India should seriously consider borrowing from the EU’s Digital Services Act (DSA) and avoid a one-size-fits-all approach. Platform obligations must be sensitive to size, audience, and function. Larger platforms with greater reach should be held to a more rigorous standard of moderation and transparency duties, while smaller or infrastructure-level intermediaries should not be expected to meet the same thresholds.
- Bring FCUs and Portals Like Sahyog Within Legal Oversight: All means of informal censorship, whether through FCUs or portals like Sahyog, must be regulated by statute. They cannot operate in a legal grey zone.
- Standard Operating Procedures Need to Exist: In line with the Delhi High Court’s order in Starbucks, regulators need to issue binding and comprehensive SOPs that set out what compliance with Rule 3(1)(b) looks like: what timelines apply, what obligations are involved, which categories of content are covered, and what tools may be used. These SOPs should be developed through open consultation with platforms, civil society, and the legal fraternity to ensure they are holistic.
CONCLUSION
What began as a vague statutory phrase may evolve into a chilling instrument of digital control in India. Over the course of this piece, I have shown how this vague obligation has been sidestepped by the courts and expanded by the executive through advisories and government agencies, leaving intermediaries to operate in a climate of legal opacity and risk. Across both commercial and constitutional domains, the fallout is already visible. This moment calls for more than a patchwork advisory or reactive litigation; it calls for a comprehensive revisiting of the regulations governing intermediaries and the digital space. The upcoming Digital India Act may provide that opportunity. Until then, courts cannot shy away from doing what they have so far avoided.
[* The author is a fourth-year student pursuing B.Sc. LL.B. (Hons.) at the National Law Institute University, Bhopal]