KEY ISSUES AND ANALYSIS

Under the Information Technology Act, 2000, an intermediary is not liable for third-party information that it holds or transmits.[6] However, to claim such exemption, it must fulfil due diligence requirements under the Act and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules).[7] These requirements include specifying in service agreements the categories of content that users are not allowed to upload or share, and taking down content on receiving a government or court order. Prohibited content includes material that is obscene, is harmful to children, impersonates another person, or threatens public order. The Amendment adds that intermediaries must make “reasonable efforts to cause” users to not create, upload, or share prohibited content. This requires intermediaries to use their own discretion in deciding what content is prohibited, and to take measures to prevent such content from being hosted on their platforms. There are two issues with this.

First, in Shreya Singhal vs Union of India (2015), the Supreme Court held that under the IT Act, 2000, intermediaries can disable content only upon receiving an order from a court or from the appropriate government or its agency.[8] Requiring intermediaries to apply their own minds in deciding what constitutes prohibited content may expand their role from facilitators of user-generated content to regulators of content on their platforms.

Second, intermediaries are also required to respect users’ fundamental right to speech and expression (Article 19). This implies that they will have to weigh whether content should be removed against whether such removal would violate a user’s right to speech. Intermediaries may not be the most appropriate entities to decide whether the removal of any content violates a citizen’s fundamental right, as such questions require judicial capability and are typically decided by courts.

2023 Draft Amendments

Rules may be going beyond the powers delegated under the Act

The IT Act provides a safe-harbour model for intermediaries. Under the Act, the central government may make Rules specifying: (i) safeguards or procedures to block information for access by the public, and (ii) guidelines to be observed by intermediaries for exemption from liability for third-party information.[9] The draft Amendments: (i) create a new ground of false information for restricting content, (ii) add a definition of online games, (iii) create a new category called online gaming intermediary, (iv) provide for a self-regulatory body that would register such online gaming intermediaries, and (v) create a framework to regulate their content.

The Rules may be going beyond the scope of the Act by: (i) adding a new ground of false information (which does not violate any existing law) as a condition for intermediary protection, (ii) adding a new category of intermediaries (online gaming), and (iii) regulating a new set of online activities. The Act does not provide for the regulation of false information, nor does it delegate the power to regulate online gaming to the Executive. The Supreme Court has held that Rules cannot alter the scope, provisions, or principles of the parent Act.[10],[11],[12]

Removing false information

The IT Act regulates intermediaries through a safe-harbour model. Under this, they are granted protection from liability for illegal user-generated content if they fulfil certain obligations. The IT Rules specify the obligations that intermediaries must meet to claim safe harbour. The draft Amendments add that, in order to claim safe harbour, intermediaries must remove any content identified as false by the fact-check unit of the Press Information Bureau (PIB), or by any other government entity. This raises several issues, discussed below.

Removing information for being false may violate fundamental rights

Removing online content for being false may undermine: (i) citizens’ right to freedom of speech and expression, and (ii) journalists’ right to practise their profession. Under Article 19(1)(a), all citizens have the right to freedom of speech and expression.[13] Article 19(2) provides that this right may be restricted only on grounds of national security, public order, decency or morality, contempt of court, defamation, or incitement to an offence.[14] Falsity in and of itself is not a constitutional ground for restricting speech. Therefore, an individual has the right to make statements that may be false, unless they attract one of these grounds. In 2015, while examining amendments to the IT Act, the Supreme Court struck down Section 66A since it restricted free speech beyond the grounds mentioned in Article 19(2). Section 66A prohibited sharing information that was false or offensive and caused annoyance, danger, insult, or injury.8

As per the draft Amendments, news articles, if identified as false (by PIB or any other centrally authorised agency), may be removed from online platforms. This may violate journalists’ right to carry out their profession. Journalists use intermediary platforms for disseminating news and opinion pieces, and circulating news on such platforms may be integral to their profession. The Supreme Court has also held that the freedom of profession through the medium of the internet is constitutionally protected under Article 19(1)(g), subject to reasonable restrictions in the public interest.[15] These restrictions may also violate the freedom of the press, which is protected under the freedom of speech and expression in Article 19(1)(a).15,[16]

Intermediary protection for false information that may not cause harm

Intermediary liability can arise (and a safe harbour is needed) only when an offence is committed on their platforms. However, since merely posting false information is not an offence, there may not be a need for intermediary protection in such cases. False information is currently regulated to address specific harms arising out of its spread. For instance, the Indian Penal Code (IPC), 1860 penalises defamation, i.e., making false statements about a person with the intention of harming their reputation.[17] Section 171G of the IPC penalises false statements concerning the personal character of a candidate, made with the intent of affecting the result of an election.[18] The Consumer Protection Act, 2019 prohibits misleading advertisements that make false claims regarding a product, its use, or its guarantee.[19]

Removal of content by the executive

To claim safe harbour, an intermediary must remove any information identified as false by PIB or any other centrally authorised agency. It may not be appropriate to empower an executive body to cause the removal of content. For example, there may be content that is critical of the government; authorising a government body to direct its removal may create a conflict of interest and violate the principle of separation of powers.

There are some instances where the executive may direct the removal of content. Section 69A of the IT Act empowers the central government to direct the blocking of information if it is necessary for certain objectives such as the security of the state, public order, or preventing incitement to offences related to these. The Rules made under this Section authorise the Secretary of the IT Ministry to block access on the recommendation of a committee of Joint Secretary-level officers.[20],[21] The originator of the information is provided an opportunity of being heard before a blocking order is made. The Supreme Court has upheld the validity of Section 69A since it sets a high threshold for allowing the executive to block content.8

Courts have ruled that only a high-level executive authority may curtail fundamental rights. Under the Aadhaar Act, 2016, a Joint Secretary was empowered to reveal individual information (biometric data or Aadhaar number) in the interest of national security.[22] The Supreme Court struck down the provision and held that a high-ranking officer, preferably along with a judicial officer, should be empowered to reveal information about individuals.[23] Consequently, the Act was amended to empower a Secretary-level officer.

Further, under the IT Rules, a user may approach a centrally appointed Grievance Appellate Committee to complain against content removal. This implies that the executive determines what content to remove, and also resolves complaints against such removal. Since the removal of such content has implications for free speech, it may not be appropriate for the Executive to resolve such grievances. Note that if a person is not satisfied with the decision of the Grievance Appellate Committee, they may approach the courts.

International experience with regulating false information

The European Union addresses the issue of false information through voluntary self-regulatory mechanisms. The 2022 Code of Practice on Disinformation, signed by intermediaries, specifies several commitments to counter online disinformation.[24] These include: (i) not funding the dissemination of disinformation, (ii) ensuring transparency in political advertising, and (iii) enhancing cooperation with fact-checkers. The European Democracy Action Plan defines disinformation as false information that is shared with an intent to cause harm, and that may cause public harm.[25]

Several intermediaries already regulate false or misleading information through voluntary fact-checking. Social media intermediaries frame ‘community standards’ that users must abide by to use the service. For instance, Twitter users can provide context on potentially misleading posts through the ‘Community Notes’ feature. Contributors can leave notes on any tweet; if enough contributors from different points of view rate a note as helpful, it is publicly displayed on the tweet. Such tweets are not removed. During the run-up to the 2020 presidential elections in the United States, Twitter updated its Civic Integrity Policy, under which it labels tweets that make false claims about polling booths, election rigging, ballot tampering, or misrepresented affiliations.[26] Such content is either prohibited or flagged as misleading, depending on the severity of the violation of Twitter’s policy.26 Similarly, Instagram prohibited false claims regarding COVID-19 and its vaccines, to curb misinformation about the spread of the disease and the effectiveness of vaccines.[27] It also flagged all COVID-related content with a disclaimer to trust only verified medical research.

Regulation of online gaming

Jurisdiction of the Centre to regulate online gaming

The central government seeks to regulate online gaming through the rule-making powers given under the IT Act, 2000.9 While ‘games’ is not included under any List in the Seventh Schedule of the Constitution, ‘sports’ (entry 33) and ‘betting and gambling’ (entry 34) are included in the State List.[28] Communication (entry 31) is included in the Union List. Online games are played through communication devices over the internet. To determine legislative competence over a subject matter, courts typically apply the doctrine of pith and substance.[29] That is, they identify the central purpose of the law and see which of the Lists it falls under.

Based on this doctrine, the Centre has competence to regulate only those aspects of online gaming that correspond to communication. However, in 2018, the Law Commission observed that Parliament has the competence to legislate on online betting and gambling, as it is conducted over communication media (such as telephones, wireless, or broadcasting).[30]

The draft Amendments provide for self-regulatory bodies (SRBs) that will be responsible for verifying games and regulating their content. They also delineate principles through which SRBs will regulate the content of online games. The question is whether such requirements go beyond regulating the communication aspect of gaming, and hence beyond the legislative competence of the Centre. Further, as discussed above, prescribing these regulations through Rules may exceed the permitted scope of delegation.

Requiring game users to do a KYC verification may be excessive

The draft Amendments require online gaming intermediaries to inform users of the know-your-customer (KYC) procedure followed to register a user account. Hence, gaming intermediaries must have a KYC procedure. As per the Rules under the Prevention of Money Laundering Act, 2002, KYC verification is carried out to prevent money laundering or the financing of terrorism.[31],[32] It is unclear whether playing an online game requires such a high threshold for user identification. Service providers such as ride-sharing apps, content streaming platforms, or physical lotteries also involve financial transactions but do not require KYC for customer identification. Further, KYC verification is required to be done by banks or non-banking financial companies.[33] To the extent that online game users are transferring money, KYC verification would already be covered under the respective bank’s requirements for customer identification.

[5]. Notice for Public Consultation, Ministry of Electronics and Information Technology, January 2, 2023.

[8]. Shreya Singhal vs Union of India, Writ Petition (Criminal) No. 167 of 2012, The Supreme Court of India, March 24, 2015.

[11]. Kerala State Electricity Board vs Indian Aluminium Company, 1976 SCR (1) 552, The Supreme Court of India, September 1, 1975.

[12]. State of Karnataka vs Ganesh Kamath, 1983 SCR (2) 665, The Supreme Court of India, March 31, 1983.

[15]. Anuradha Bhasin vs Union of India, Writ Petition (Civil) No. 1031 of 2019 and 1164 of 2019, The Supreme Court of India, January 10, 2020.

[23]. Justice K.S. Puttaswamy vs Union of India and others, Writ Petition (Civil) No. 494 of 2012 and connected matters, The Supreme Court of India, September 26, 2018.

[26]. Civic Integrity Policy, Twitter, October 2021, as accessed on January 26, 2023.

[30]. Report No. 276, Law Commission of India, Legal Framework: Gambling and Sports Betting Including in Cricket in India, July 5, 2018.

DISCLAIMER: This document is being furnished to you for your information. You may choose to reproduce or redistribute this report for non-commercial purposes in part or in full to any other person with due acknowledgement of PRS Legislative Research (“PRS”). The opinions expressed herein are entirely those of the author(s). PRS makes every effort to use reliable and comprehensive information, but PRS does not represent that the contents of the report are accurate or complete. PRS is an independent, not-for-profit group. This document has been prepared without regard to the objectives or opinions of those who may receive it.
