The digital revolution has changed how we access, share, and consume information. With just a few clicks or taps, people can join global conversations, find a wide range of knowledge, and take part in public discussions like never before. However, this new level of connectivity brings significant problems. The most pressing is the widespread dissemination of misinformation and disinformation, which has made connectivity a double-edged sword in the digital arena. These issues threaten societal trust, democratic stability, and public health. Therefore, the idea of Digital Platform Sovereignty (DPS), the ability of a nation to control and protect its digital information ecosystem, has emerged as a policy imperative, particularly for countries with large and varied online populations, like India. This article analyses aspects of misinformation and disinformation in the digital age, their broad effects, and the strategies needed to address them.
India’s Regulatory and Policy Landscape
The regulation of misinformation and digital sovereignty has seen new initiatives in India in response to emerging challenges. Under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, digital intermediaries are required to maintain a grievance redressal mechanism, act on certain user complaints within 72 hours, and, in the case of significant messaging platforms, enable the identification of the first originator of unlawful messages, subject to legal orders. The rules aim to strike a balance between platform accountability and freedom of speech and expression, but they have been criticized on the grounds of privacy protection and possible overreach.

The new and controversially debated Karnataka Digital Misinformation and Fake News (Prohibition) Bill, 2025, prescribes rigorous punishments for anyone found spreading digital misinformation, disrespecting Sanatan symbols, or furthering superstition on social media, and provides for the creation of a state-level “fake news regulatory authority” with wide powers to monitor and regulate digital media. While the government insists that the bill intends to “connect the dots” of existing policy, critics argue that its broad definitions, unaccompanied by safeguards, may stifle legitimate dissent and lend themselves to politically motivated censorship. Notably, companies as well as social media platforms may be held liable for abetment, with additional jail terms, when fake news is shared on their networks. While the intention of curbing harm from falsehoods is genuine, such measures need to be weighed against the constitutional right to freedom of speech and expression. Vague or sweeping content blocks cast a chilling effect on democratic debate and erode public trust. Intervention, therefore, must be narrowly confined, transparent in its operation, and subjected to rigorous judicial scrutiny to preclude misuse to the detriment of fundamental rights.

Operation Sindoor, by contrast, is a stark reminder of the threats posed by digital disinformation. After India bombed terror camps in Pakistan, concerted campaigns run through fake accounts and bots stoked outrage on social media with doctored images and fabricated narratives claiming Indian army reverses and communal violence. The Indian government, with the help of agencies like PIB Fact Check, stepped in to debunk these falsehoods within hours, underscoring the need for a rapid, technologically advanced, and coordinated response to digital threats.
Global Best Practices: Japan, UK, EU, and Australia
Other countries have implemented structured models for combating digital misinformation and asserting sovereignty. Japan has adopted a measured approach with an emphasis on public-private collaboration and media literacy. Rather than opting for stringent legal censorship, it empowers users and platforms to identify misinformation and respond accordingly, particularly during emergencies like natural disasters. The United Kingdom (UK), meanwhile, has developed a hybrid framework combining regulatory oversight with platform accountability. Its Online Safety Act places a “duty of care” on digital platforms to prevent the spread of harmful content, including misinformation, and imposes heavy penalties for non-compliance. The UK also invests in multi-sectoral collaboration and media literacy campaigns, treating misinformation as a societal challenge that requires coordinated governance.

The European Union (EU) is recognised for its comprehensive, rights-based approach. The Digital Services Act (DSA) and the Code of Practice on Disinformation require the larger online platforms to analyse and mitigate systemic risks, provide transparency on content moderation, and cooperate with independent auditors. Regular risk assessments, transparency reports, and independent audits are among the requirements of the DSA, while the Code encourages platforms to disrupt advertising payments to purveyors of disinformation and to bolster fact-checking efforts. Concrete enforcement mechanisms and respect for fundamental rights accompany these measures. Australia holds platforms accountable through its Online Safety Act 2021, which gives the eSafety Commissioner powers to require harmful content and misinformation to be removed within 24 hours. Australia also requires digital platforms to disclose information publicly, file regular transparency reports, and run public education campaigns. Its model is regarded as proactive and effective in its rapid enforcement while preserving free speech and media diversity.
Recommendations for India: A Strategic, Integrated Approach
India stands at a critical juncture in combating digital misinformation. The most resilient digital societies are those that treat misinformation governance and digital sovereignty as foundational design principles rather than as afterthoughts or reactionary policy measures. India, therefore, must not restrict itself to reacting to specific incidents, such as those that occurred during Operation Sindoor, but instead set a future-ready framework that strengthens democratic principles while nurturing innovation.
First and foremost, India ought to invest substantially in developing its digital infrastructure domestically. This would reduce dependence on foreign platforms and make it easier to enforce data localisation norms and content moderation laws. India can learn vital lessons from the EU’s DSA, from Australia’s rapid takedown provisions under its Online Safety Act, and from the UK’s statutory approach to holding platforms to account. These frameworks require timely risk assessments, regular transparency reports, and independent audits; practices India would do well to follow largely as is, with only minor adaptations, and thereby foster a culture of platform accountability and give regulators leverage to intervene whenever needed.
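To make the reporting obligation concrete, the following is a minimal sketch of what a standardised transparency-report record and a basic regulatory screening check might look like. The field names, thresholds, and the example figures are illustrative assumptions and are not drawn from the DSA, the Online Safety Act, or the IT Rules, 2021.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical schema for a periodic platform transparency report.
# Fields and thresholds are illustrative assumptions, not an actual regulator's format.
@dataclass
class TransparencyReport:
    platform: str
    period_start: date
    period_end: date
    complaints_received: int
    complaints_resolved: int
    takedowns: int
    takedowns_restored_on_appeal: int
    audit_completed: bool = False

    def resolution_rate(self) -> float:
        """Share of complaints resolved within the reporting period."""
        if self.complaints_received == 0:
            return 1.0
        return self.complaints_resolved / self.complaints_received

    def flags(self) -> list[str]:
        """Simple red flags a regulator might screen for (illustrative thresholds)."""
        issues = []
        if self.resolution_rate() < 0.9:
            issues.append("low complaint-resolution rate")
        if not self.audit_completed:
            issues.append("independent audit missing")
        return issues

# Example usage with made-up numbers.
report = TransparencyReport(
    platform="ExamplePlatform",
    period_start=date(2025, 1, 1),
    period_end=date(2025, 6, 30),
    complaints_received=12000,
    complaints_resolved=10200,
    takedowns=900,
    takedowns_restored_on_appeal=45,
    audit_completed=False,
)
print(report.resolution_rate())  # 0.85
print(report.flags())            # ['low complaint-resolution rate', 'independent audit missing']
```

The point of such a schema is less the specific fields than the discipline it imposes: comparable, machine-readable filings that regulators and independent auditors can screen at scale rather than reviewing ad hoc.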
Additionally, cross-sector rapid response teams need to be institutionalised in the country, modelled on the measures taken during Operation Sindoor and on Australia’s 24-hour takedown rule. These teams should ideally be AI-enabled and capable of monitoring, analysing, and responding to emerging misinformation trends in real time, especially during elections or national security events.
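As a rough illustration of what an AI-enabled rapid-response workflow could involve at its simplest, the sketch below raises an alert when too many recently flagged posts cluster around the same narrative. The topic labels, window size, and alert threshold are hypothetical; a real system would sit downstream of classification and fact-checking pipelines that are not modelled here.

```python
from collections import deque, Counter
from dataclasses import dataclass

# Hypothetical spike detector over a stream of posts already labelled
# (e.g. by an upstream classifier or fact-checking team) with a topic tag.
@dataclass
class SpikeDetector:
    window_size: int = 1000      # number of recent flagged posts to keep
    alert_threshold: int = 50    # flagged posts on one topic within the window

    def __post_init__(self):
        self.window = deque(maxlen=self.window_size)
        self.counts = Counter()

    def observe(self, topic: str) -> bool:
        """Record one flagged post; return True if the topic crosses the alert threshold."""
        if len(self.window) == self.window.maxlen:
            oldest = self.window.popleft()   # drop the oldest post from the rolling window
            self.counts[oldest] -= 1
        self.window.append(topic)
        self.counts[topic] += 1
        return self.counts[topic] >= self.alert_threshold

# Example: simulate a burst of posts pushing one fabricated narrative.
detector = SpikeDetector(window_size=200, alert_threshold=30)
stream = ["doctored-image-claim"] * 40 + ["unrelated-rumour"] * 10
alerts = [topic for topic in stream if detector.observe(topic)]
print(f"Alerts raised: {len(alerts)}")  # alerts begin once the 30-post threshold is crossed
```

A rolling window like this is deliberately simple; its value in a rapid-response setting is that it surfaces an emerging narrative within minutes, leaving human analysts to verify and decide on escalation.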
Equal attention must be given to empowering citizens. India should invest in large-scale media literacy campaigns in multiple languages, akin to the public education drives in Japan and the awareness campaigns in Australia. The intent is to equip people with the critical thinking skills to distinguish authentic from false information, helping them build resistance to online manipulation. Public-private partnerships can support these efforts by sharing the burden of building digital hygiene across platforms, government, and civil society.
By actively participating in international forums and aligning with best practices from the UK, EU, Japan, and Australia, India can help shape global standards on digital sovereignty and misinformation, while tailoring solutions to its unique social and technological context. Regulatory frameworks should be clear, rights-based, and innovation-friendly—balancing the need for swift action with the imperative to protect privacy and freedom of expression. Any content-blocking or takedown powers, such as those proposed in Karnataka, must be exercised transparently and in line with constitutional principles to avoid chilling legitimate speech.
Conclusion
India’s path forward lies in integrating technological sovereignty, platform accountability, public empowerment, and international cooperation. By making these principles central to its digital governance, India can not only defend its information ecosystem from hostile actors but also set a benchmark for democratic resilience and digital trust in the Global South.