Online Hate Speech in India: Balancing Free Speech and Regulation – A Legal Analysis

Introduction

The digital revolution has redefined communication by making it faster, more accessible, and widespread. In India, with over 800 million internet users and one of the largest social media markets in the world, this transformation has brought both opportunities and challenges. The promise of online platforms to democratize information and enable diverse voices is often overshadowed by the proliferation of hate speech, misinformation, and polarizing narratives.[1]

Hate speech in the online sphere is more insidious than its offline counterpart due to the potential for anonymity, virality, and permanence. It can take the form of text, images, videos, memes, or audio messages and often targets individuals or communities based on religion, caste, gender, ethnicity, or political beliefs.[2] What distinguishes online hate speech is its ability to go viral within seconds, creating mass impact and, at times, inciting real-world violence as witnessed in instances such as the Delhi riots (2020) and the targeted mob lynchings allegedly triggered by fake WhatsApp forwards.[3]

In India, there exists no single, comprehensive legislation defining or regulating hate speech. Instead, the legal framework is scattered across various penal provisions, constitutional principles, and regulatory guidelines. Statutes such as the Indian Penal Code and the Information Technology Act, read together with judicial interpretations, attempt to delineate the boundary between protected expression and punishable speech. However, the absence of clear definitions, procedural safeguards, and uniform enforcement has resulted in inconsistent judicial outcomes and heightened concerns regarding potential encroachments on the right to free speech.

Constitutional Framework

The foundational basis for regulating speech in India lies in Article 19(1)(a) of the Constitution, which guarantees to every citizen the right to freedom of speech and expression.[4] This right forms the bedrock of democratic discourse, enabling individuals to freely express opinions, criticize government actions, participate in debates, and contribute to the cultural and political life of the nation. However, the framers of the Constitution also recognized the need to impose limits on this freedom in the larger interest of public welfare.

Article 19(2) of the Constitution enumerates specific grounds on which the State can impose “reasonable restrictions” on the exercise of the right to free speech. These grounds include the sovereignty and integrity of India, the security of the state, friendly relations with foreign states, public order, decency or morality, contempt of court, defamation, and incitement to an offence. Importantly, the term “reasonable” indicates that not all restrictions are permissible; only those that meet certain legal and constitutional thresholds will survive scrutiny.[5]

In the context of hate speech, the Supreme Court of India has consistently held that speech loses its constitutional protection under Article 19(1)(a) when it transgresses the boundaries laid down in Article 19(2). In Amit Jani v. Union of India, the Court unequivocally stated that expressions which incite violence, propagate enmity between different groups, or disrupt public order can be legitimately regulated.[6] The Court further highlighted that hate speech attacks the dignity, safety, and equality of individuals, especially those belonging to vulnerable or marginalized communities, thereby necessitating strict legal scrutiny.

Furthermore, any restriction on speech must satisfy the three-pronged test of legality, necessity, and proportionality. First, the restriction must be grounded in an existing law, ensuring that arbitrary executive action cannot curtail speech rights (legality). Second, the law must pursue a legitimate aim, such as preventing communal violence or protecting public order (necessity). Third, the restriction must be the least restrictive means to achieve that aim without disproportionately infringing on individual rights (proportionality).

In Shreya Singhal v. Union of India (2015), which struck down Section 66A of the IT Act, the Supreme Court reinforced the importance of this proportionality test. The Court noted that vague and overbroad laws can produce a chilling effect, where individuals refrain from expressing legitimate opinions out of fear of legal repercussions.[7] This principle is particularly relevant in the digital age, where overregulation of online speech can silence dissent and discourage civic participation.

Hence, while the Constitution allows the regulation of hate speech, such regulation must not be wielded as a tool for censorship or political suppression. A balance must be struck where hate speech is tackled firmly, but legitimate expression, especially criticism of those in power, is safeguarded. This delicate balance lies at the heart of India’s constitutional democracy and informs both legislative drafting and judicial review of laws aimed at curbing online hate speech.[8]

A significant obstacle to regulating online hate speech in India lies in the absence of a precise, comprehensive, and technologically adaptable legal definition. Unlike jurisdictions such as the United Kingdom or Canada, which have incorporated statutory definitions of hate speech tailored to modern communications technology, India continues to rely on a patchwork of colonial-era penal laws. The Indian Penal Code (IPC), enacted in 1860, contains provisions such as:

Section 153A: Punishes the promotion of enmity between different groups on grounds of religion, race, place of birth, residence, language, etc., and acts prejudicial to the maintenance of harmony.[9]

Section 295A: Penalizes deliberate and malicious acts intended to outrage religious feelings.[10]

Section 505(1)(b): Criminalizes the publication or circulation of content that may cause fear or alarm to the public or disturb public tranquility.[11]

While these sections provide some tools to address hate speech, none of them offer a specific or exhaustive definition tailored to digital content or social media virality. As a result, enforcement often becomes subjective and inconsistent, leading to allegations of misuse or selective targeting.[12]

Judicially, India’s understanding of hate speech has evolved through case law and interpretative tests. Courts have identified hate speech as expression targeting individuals or groups based on identity markers—such as religion, caste, gender, ethnicity, or sexual orientation—with the intent to incite violence, discrimination, or hostility. In Pravasi Bhalai Sangathan v. Union of India, the Supreme Court acknowledged the inadequacy of existing laws to address modern forms of hate speech and referred the issue to the Law Commission. The 267th Report (2017)[13] subsequently proposed introducing Sections 153C and 505A to specifically criminalize hate speech.

The judiciary has also developed key tests to determine when speech becomes punishable. In Romesh Thapar v. State of Madras (1950), the Court formulated the “proximity test,” holding that restrictions on speech are valid only when a direct and proximate nexus exists between the speech and public disorder. This principle aligns with the U.S. Supreme Court’s decision in Brandenburg v. Ohio (1969), which protected speech unless it incited “imminent lawless action.” Adopting a similar approach, the Supreme Court in Shreya Singhal v. Union of India (2015) struck down Section 66A of the IT Act, ruling that vague and broad restrictions on online speech were unconstitutional. The Court reaffirmed that only speech directly inciting violence or disorder may be legitimately curtailed—marking a pivotal moment in India’s digital free speech jurisprudence.

In practice, these doctrinal safeguards are often disregarded. Law enforcement authorities have at times invoked hate speech provisions against individuals for speech that is merely critical or satirical. Stand-up comedians, journalists, and activists have been prosecuted for allegedly hurting religious sentiments, even without intent or likelihood of incitement. Such selective and excessive enforcement undermines public trust in the rule of law and fosters a chilling effect, prompting self-censorship out of fear of legal action.

The rise of online hate speech in India poses a complex challenge, requiring a balance between the fundamental right to free speech under Article 19(1)(a) and the reasonable restrictions under Article 19(2) concerning public order, morality, and national integrity. To address this, India follows a dual-tier legal approach that regulates both the content of speech and the medium of dissemination through the Indian Penal Code, 1860 (now Bharatiya Nyaya Sanhita, 2023) and the Information Technology Act, 2000, along with the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.

Under the IPC, hate speech is criminalised primarily to prevent communal disharmony and public unrest. Section 153A penalises the promotion of enmity between different groups on the basis of religion, race, place of birth, language, caste, or community, whether by spoken or written words, signs, or visible representations. This provision is significant in curbing divisive speech, especially in India’s socio-religious context where communal tensions can escalate quickly. The section not only criminalises the act of promoting hatred but also punishes acts prejudicial to the maintenance of harmony. Punishment includes imprisonment of up to three years, a fine, or both.[14]

Similarly, Section 295A targets deliberate and malicious acts intended to outrage the religious feelings of any class by insulting religion or religious beliefs. This section was introduced as a response to communal tensions during the colonial period and remains relevant in today’s context where digital platforms often become battlegrounds for religious provocation. It carries a similar punishment.[15]

In addition, Section 505 penalises those who make, publish, or circulate statements, rumours, or reports that are likely to incite public mischief, panic, or hostility between communities.[16] This includes speech that could lead to violence or public fear. These IPC provisions focus on the mens rea (intention) of the offender and the potential to incite violence or disturb public peace.

While the IPC focuses on the substance and impact of hate speech, the Information Technology Act, 2000 governs its online dissemination. Section 69A of the Act empowers the Central Government to block public access to online content in the interest of national security, public order, or to prevent incitement of offences.[17] This provision forms the legal basis for online censorship, especially during communal unrest or emergencies. Procedural safeguards under the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009 mandate due process and review, though concerns persist over the transparency and accountability of this mechanism.[18]

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 strengthen India’s digital regulatory framework by imposing obligations on social media and digital platforms. They classify intermediaries into regular and “significant” categories, requiring the latter to remove unlawful content within 36 hours of official notice and to appoint compliance officers. The Rules also mandate grievance redressal, message traceability in specific cases, and adherence to a Code of Ethics for digital media and OTT platforms. Non-compliance can result in the loss of “safe harbour” protection under Section 79 of the IT Act.[19]

The regulatory distinction is clear: IPC provisions criminalize individual acts and intent behind harmful speech, while the IT Act and Rules regulate platforms, making them responsible for curbing illegal content. Yet, regulating online hate speech in India faces major constitutional and practical hurdles. The absence of a clear statutory definition of “hate speech” leads to ambiguity and selective enforcement. Provisions like Sections 153A and 295A IPC are criticized as overbroad and susceptible to misuse, often curbing legitimate expression. Social media platforms face delays in compliance due to jurisdictional and data protection conflicts, while opaque executive takedown orders under Section 69A raise serious transparency and accountability concerns.

The regulation of online hate speech in India is governed by a dual legal structure that targets both the content and the medium. While the IPC focuses on punishing hate-driven speech and intent, the IT Act and Rules ensure that digital intermediaries act swiftly and responsibly in monitoring and removing such content. The challenge lies in ensuring that these laws are enforced in a non-arbitrary, transparent, and constitutionally compliant manner, so that the right to free speech is not curtailed unduly, while still protecting the public interest and communal harmony in an increasingly digital society.

Recent Legislative Development:

Karnataka Hate Speech Regulation and Accountability Bill, 2025

In a groundbreaking move, the Karnataka state government introduced the Hate Speech Regulation and Accountability Bill, 2025, marking one of the first serious efforts by a state in India to legislatively define and regulate online hate speech.[20] The rise of digital platforms and the increasing use of social media for spreading polarising, hateful, and even violent messages have exposed significant gaps in the existing legal framework, which is largely reactive and not attuned to the technological complexities of the internet age. The Karnataka Bill seeks to address this void by enacting a comprehensive regulatory structure with preventive, punitive, and remedial mechanisms tailored for the digital space.

One of the key features of the Bill is the expansion of the definition of hate speech to explicitly include online content and digital communications. This encompasses a wide range of digital expressions, including social media posts, forwarded messages, memes, videos, and even private messages that are widely disseminated. The definition goes beyond the traditional legal understanding of hate speech, centred on incitement to violence, to include content that promotes discrimination, hostility, or marginalization based on religion, caste, race, gender, sexual orientation, or political beliefs.[21] While this broad scope attempts to capture the subtleties of modern online hate, it also raises concerns about vagueness and potential misuse.

The Bill further categorises hate speech as a non-bailable and non-cognizable offence, thereby placing it in the category of serious criminal acts. This means that the accused cannot obtain bail as a matter of right and that the police cannot arrest without prior approval of a magistrate. The purpose of this classification is to reflect the severity of hate speech in causing communal unrest, psychological harm, and social disruption. However, critics argue that this criminalisation framework, especially without a clear statutory threshold for what constitutes hate speech, may stifle dissent and be used as a tool for political vendetta.

A particularly notable and controversial provision of the Bill is its extension of legal liability to digital intermediaries, such as social media platforms and internet service providers. These platforms are required to take down flagged hate content within a specified period (typically 24–36 hours) from the time they are notified by the state authorities. Failure to comply may result in significant fines, civil liability, and even suspension of operations within the state. This move builds upon the framework under the IT Rules, 2021, but imposes stricter obligations, especially in terms of compliance timeframes and penalties. Moreover, the Bill introduces state-level authority for regulating digital content, which raises significant federalism concerns given that the regulation of telecommunication and internet services falls under the Union List of the Constitution.

Perhaps the most contentious element of the Bill is the inclusion of private communications within its scope. The Bill provides that if a private digital message (for example, sent via WhatsApp, Signal, or Telegram) is made public, either through forwarding, screenshotting, or broadcasting, and contains hate content, the originator of the message may be held criminally liable. While the objective is to track and penalize the origin of viral hate campaigns that often begin in closed groups, this provision has raised serious privacy concerns, particularly in the wake of the Supreme Court’s landmark recognition of the right to privacy as a fundamental right in Justice K. S. Puttaswamy v. Union of India (2017).[22] This provision may potentially lead to increased surveillance or backdoor traceability demands from encrypted platforms, raising alarms about state intrusion into private life.

The Karnataka Hate Speech Bill, 2025 represents an assertive and ambitious attempt to modernize the legal regulation of hate speech in the digital age. However, it also opens up a complex debate about constitutional limits on state power, freedom of speech, privacy, and federal distribution of legislative competence. While supporters view it as a necessary corrective to digital impunity, critics warn that the bill risks becoming a tool for censorship and control, unless accompanied by robust procedural safeguards and judicial oversight.

Conclusion and Way Forward

In a digital democracy like India, the challenge of regulating online hate speech demands a delicate equilibrium between constitutional freedoms and collective safety. The Karnataka Hate Speech Regulation and Accountability Bill, 2025, judicial interpretations, and the evolving role of intermediaries collectively reflect the growing recognition that digital hate speech has real-world consequences, from communal riots to targeted violence and widespread social anxiety.

Going forward, India must adopt a multi-pronged strategy that includes legislative, judicial, technological, and civic components. Legislation should be precise, objective, and proportionate, avoiding vague standards that risk abuse. Procedural safeguards such as judicial review, notice to the accused, and rights to appeal must be built into all enforcement mechanisms.[23] Digital platforms must be held accountable through enforceable obligations for transparency, timely response, and fair grievance redressal. Simultaneously, there is a pressing need to invest in digital literacy, empower users to engage in counter-speech, and promote awareness about the legal consequences of hate content.

Ultimately, regulating hate speech must not become a pretext for censorship. It should function as a constitutional mechanism to safeguard dignity, equality, fraternity, and public peace, ensuring that India’s digital public sphere remains inclusive, pluralistic, and democratic.

References

[1] Lakshmi P. Nath et al., Online Hate Speech in India: Legal Reforms and Social Impact on Social Media Platforms (Feb. 2, 2024), https://ssrn.com/abstract=4732818.

[2] Ministry of Electronics & Information Technology, Gov’t of India, Review of Legislations on Online Content Regulation in the World, IMEITY/2018/03505 (May 2024), https://www.meity.gov.in/static/uploads/2024/05/Internship-Report-Review-of-Legislations.pdf.

[3] Council of Europe, Handbook on Freedom of Expression, https://rm.coe.int/handbook-freedom-of-expression-eng/1680732814.

[4] Constitution of India, art. 19(1)(a).

[5] Constitution of India, art. 19(2).

[6] Amit Jani v. Union of India, W.P. (Crl.) No. 222/2022, Order at para 12 (S.C. Mar. 15, 2024) (India).

[7] Shreya Singhal v. Union of India, (2015) 5 SCC 1 (India).

[8] Human Rights Watch, Stifling Dissent: The Criminalization of Peaceful Expression in India.

[9] Indian Penal Code, No. 45 of 1860, § 153A, Acts of Parliament, 1860 (India).

[10] Id. § 295A.

[11] Id. § 505.

[12] Navya Gupta & Anil Kumar, Law Affecting Freedom of Speech, International Journal of Advanced Research in Science, Communication and Technology, Vol. 5, Issue 2 (Feb. 2025), https://ijarsct.co.in/Paper23301.pdf.

[13]  Law Commission of India, Report No. 267, Hate Speech (Mar. 2017).

[14] Supra note 9.

[15] Supra note 10.

[16] Supra note 11.

[17] Information Technology Act, 2000, § 69A (India).

[18] Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009 (India).

[19] Information Technology Act, 2000, § 79 (India).

[20] Karnataka Hate Speech and Hate Crimes (Prevention and Control) Bill, 2025 (India) (pending).

[21] India Today, Karnataka Hate Speech Bill to Penalise Digital Platforms for Online Violations, Proposes 3-Year Jail and Rs 5,000 Fine for Offenders.

[22] Justice K.S. Puttaswamy (Retd.) & Anr. v. Union of India & Ors., (2017) 10 SCC 1 (India).

[23] Hate Speech Can’t Be Wrongly Seen as Fundamental Right: SC, VISION IAS (May 6, 2025), https://visionias.in/current-affairs/upsc-daily-news-summary/article/2025-05-06/the-economic-times/polity-and-governance/hate-speech-cant-be-wrongly-seen-as-fundamental-right-sc

Written by:

Jashandeep Kaur
Fifth year law student, Rajiv Gandhi National University of Law, Patiala
