Bridging the Digital Divide: Lessons from the US Take It Down Act in India’s Data Protection Landscape

Edited by: Alanah Vargas, JURIST Staff

The Age of Digital Harm

We are living in a time when personal dignity is just a click away from violation. The ease with which deepfakes, intimate photos, and AI-generated imagery can be produced, disseminated, and weaponised has made the internet a minefield, especially for marginalised people, women, and children. What was once a domain of connectivity and expression now carries the risks of surveillance, exposure, and humiliation.

Against this backdrop, the United States’ Take It Down Act of 2025 stands as a bold and timely intervention. Passed with bipartisan support, the law directly targets the scourge of non-consensual intimate imagery (NCII) and AI-generated sexual content, mandating that online platforms remove such material within 48 hours of a victim’s complaint and imposing heavy penalties on non-compliant parties. It thus offers a structured remedy to victims in an era of borderless abuse.

While the world moves toward rights-respecting digital regulation, India’s legal system lags behind. Although the Indian judiciary has consistently upheld the right to privacy as a fundamental right under Article 21, most notably in Justice K.S. Puttaswamy v. Union of India, the statutory framework remains weak and fragmented, offering few practical mechanisms to protect digital dignity, particularly against the threats posed by deepfakes, revenge porn, and unauthorised data leaks.

This piece seeks to place the Take It Down Act in comparative perspective. It will evaluate how India is currently addressing similar threats, what constitutional principles demand of us, and how our law and policy framework can evolve to provide both technological safeguards and legal remedies without undermining free expression. The challenge is not just to “take it down” but to build a digital rights regime that anticipates, prevents, and redresses harm before the damage is irreversible.

The Take It Down Act, 2025: A Legislative Milestone

The United States enacted the Take It Down Act in May 2025, marking a watershed moment in the global conversation surrounding digital privacy and online harm. Crafted in response to the alarming rise of non-consensual intimate imagery (NCII) and AI-generated deepfake pornography, the Act provides a long-overdue legal remedy for victims, especially minors and women, whose dignity is violated through the unauthorised circulation of intimate or sexually explicit content.

At its core, the Act mandates that online platforms remove such content within 48 hours of receiving a verified takedown request from the victim. The law further requires platforms to maintain a reporting system for victims and to track takedown requests transparently and in a timely manner.

The scope of the legislation is forward-looking, covering not only traditionally captured intimate images but also digitally altered or synthetically generated content, such as deepfakes and AI-manipulated pornography. This is a crucial advancement, as AI technologies have made it increasingly easy to fabricate compromising content without the victim’s knowledge or involvement.

Perhaps most critically, the Act shifts the burden of action from the victim to the platform. Where victims were earlier expected to navigate a complex web of torts, criminal complaints, and cease-and-desist notices, the new regime places a proactive responsibility on digital intermediaries to act swiftly and decisively.

While civil liberties groups have raised valid concerns about possible overreach and due process—particularly in cases involving mistaken identity or satire—the law incorporates narrow exceptions and “good faith” protections.

For many democracies grappling with similar issues, the Take It Down Act offers both a blueprint and a challenge: how to build a legal architecture that is technologically aware, constitutionally grounded, and rights-affirming.

Mapping India’s Current Digital Landscape

India’s digital transformation has brought hundreds of millions of citizens into the online economy, where social media use and digitally connected interactions leave behind a vast data trail. Yet, our legal safeguards have not kept pace with this technological leap. When it comes to protecting citizens—especially women and children—from the non-consensual dissemination of intimate images or AI-generated deepfakes, India’s legal toolkit remains piecemeal, reactive, and under-enforced.

Fragmented Legal Provisions

India does not currently have a dedicated law akin to the Take It Down Act. Instead, victims must rely on a patchwork of legal provisions spread across various statutes:

Information Technology Act, 2000:

  • Section 66E: Punishes the violation of privacy through the capturing or transmission of private images.
  • Section 67 & 67A: Penalises publishing or transmitting obscene or sexually explicit content in electronic form.
  • Section 69A: Empowers the government to block public access to content under specified conditions.

Bharatiya Nyaya Sanhita, 2023 (BNS):

  • Provisions penalising voyeurism, stalking, and acts intended to insult the modesty of a woman, carried over from the Indian Penal Code, offer partial recourse for image-based abuse.

While these provisions exist, they are often inadequate in addressing rapidly unfolding threats of AI-generated deepfakes, revenge porn, or the viral spread of content on anonymous channels.

The Promise (and Delay) of the DPDPA, 2023

The Digital Personal Data Protection Act, 2023 (DPDPA) represents a significant step toward establishing a comprehensive data governance framework. The law emphasises consent-based data processing, establishes rights like data erasure, and promises the creation of a Data Protection Board. Yet much of that promise remains on paper: the Act’s implementing Rules were released only in draft form in early 2025, and the Data Protection Board is yet to become operational.

Where India Lags Behind and What the Constitution Demands

India’s response to digital harms remains reactive and fragmented. Despite a swelling tide of online abuse, ranging from revenge pornography to AI-generated deepfakes, our legal architecture still lacks a clear, victim-centric, and constitutionally grounded framework to deal with these threats effectively.

Legislative efforts are spread thin across multiple statutes. The Information Technology Act, 2000, the BNS, and the DPDPA provide limited and scattered protections, with no single law addressing the specific harm of non-consensual intimate imagery or digitally fabricated content. These laws also burden victims with navigating multiple legal frameworks to seek justice.

While the DPDPA lays the foundation for a consent-based data governance model, it does not directly criminalise or provide redress for non-consensual intimate content or deepfake pornography. However, the recent release of the Draft DPDP Rules, 2025, marks a pivotal shift. These Rules introduce mechanisms that include:

  • Explicit obligations for data fiduciaries to protect personal data
  • 48-hour prior notice before data erasure
  • Mandated breach notifications to the Data Protection Board and users
  • Stricter provisions regarding children’s data, including a prohibition on tracking and behavioural targeting
  • Enhanced oversight of Significant Data Fiduciaries, requiring data protection impact assessments

While these Rules demonstrate growing policy sophistication, they stop short of tackling image-based abuse or platform liability in cases of intimate content circulation. There exists no mandate for platforms to act within specific timelines in such cases, nor is there a defined mechanism for fast-tracking the takedown of sensitive, manipulated content.

Law enforcement is further hampered by inadequate training in cybercrime investigation, delays in registering complaints, and a general lack of procedural clarity. Victims, particularly women and marginalised individuals, are forced to navigate a maze of legal provisions and often face stigma in the process. Even the takedown mechanisms prescribed under the IT Rules, 2021, lack enforceability and transparency.

What makes this inaction harder to justify is that India’s constitutional framework grants lawmakers broader latitude than many Western constitutions allow to enact protective legislation. The United States, for example, had to walk a constitutional tightrope to pass the Take It Down Act, balancing it against the First Amendment’s free speech protections. Even so, it managed to carve out a narrowly tailored exception addressing non-consensual and AI-generated intimate content.

India has no such constitutional constraint. Our Constitution guarantees freedom of speech under Article 19(1)(a) but allows for reasonable restrictions under Article 19(2), including in the interests of “decency” and “morality.” The right to privacy and dignity under Article 21—affirmed by the Supreme Court in Puttaswamy v. Union of India—strengthens the moral and legal imperative for the state to act.

However, the burden remains on the victim to report, to prove, and to pursue. This is both unjust and avoidable.

India lags behind not for want of laws, but because it has failed to translate constitutional promises into digital realities. Protecting individuals from technology-facilitated harm is no longer a matter of future readiness; it is a present necessity. It demands legislative clarity, institutional readiness, and above all, political will.

What India Can Learn: Legal & Policy Recommendations

India finds itself at a pivotal juncture where digital governance must catch up to the scale and pace of harm occurring in online spaces. The United States’ passage of the Take It Down Act reflects a growing consensus that the burden of protecting dignity in the digital age can no longer fall solely on the victim. Instead, it must be backed by responsive law, strong enforcement, and accountable intermediaries.

The Digital Personal Data Protection Act, 2023, offers a solid foundation in that direction, with the Draft DPDP Rules, 2025, focusing on consent architecture, data breach protocols, and children’s data protection. But the regulatory ecosystem remains ill-equipped to directly address the specific and acute harms of non-consensual intimate imagery and AI-generated deepfakes. These harms operate not just as data violations, but as personal and reputational violence that is often gendered and irreversible.

India needs a dedicated statutory framework that goes beyond offering broad data protection. Instead, the framework must directly target the harms of image-based sexual abuse and synthetic content dissemination with more specificity. This could take the form of a standalone law or a comprehensive chapter within the IT Act that: (1) clearly defines NCII and deepfakes; (2) prescribes penalties; (3) outlines digital platforms’ responsibilities; and (4) creates enforcement agencies. Such a law must shift the burden from the victim to the ecosystem, placing enforceable duties on digital platforms to prevent, respond to, and remediate harm.

In the absence of such a law, India must at a bare minimum amend the existing IT Rules and DPDP Rules to introduce mandatory takedown timelines, ideally within 24 to 48 hours of a verified complaint. These changes should require platforms to establish user-friendly reporting systems and subject them to liability in cases of wilful delay or negligence. The current draft rules are a start—especially the duties they impose to protect children and to oversee Significant Data Fiduciaries—but they must grow to accommodate harm-specific categories, such as intimate imagery and morphing-based abuse.

A key step toward meaningful redress lies in swiftly bringing the Data Protection Board into operation, as envisioned under the DPDP Act. This board should not merely function as a regulatory authority for consent violations, but must also adjudicate digital sexual abuse cases, issue urgent takedown orders, impose penalties on intermediaries, and ensure platform compliance. To grant meaningful access to justice, this board should coordinate with cyber crime cells, women’s commissions, and legal aid bodies to protect complainants and reduce justice system fatigue.

India must also invest in institutional capacity. Law enforcement agencies, prosecutors, and magistrates must be trained in the nuances of digital forensic investigation, AI-based image manipulation, and trauma-informed policing. Creating digital redress tribunals or fast-track e-courts could dramatically reduce the delay and stigma faced by survivors. Meanwhile, survivors must be offered structured legal, psychological, and procedural support, including the right to file complaints anonymously and to demand compensation for platform inaction.

As we craft new regulatory models, it is also critical to protect against misuse. Takedown powers must not become tools of censorship or political vendetta. Any framework must uphold the principle of proportionality, ensuring that freedom of speech under Article 19(1)(a) is balanced with the right to dignity and privacy under Article 21. Laws must protect legitimate content—such as satire, investigative journalism, or whistleblowing—while also targeting real harm.

Finally, no legal ecosystem can function in isolation from public consciousness. India must launch sustained, multilingual public awareness campaigns to educate citizens—especially minors and first-generation digital users—about online consent, privacy rights, and reporting tools. Schools, EdTech platforms, and NGOs should partner to build a digitally literate and rights-conscious citizenry.

The Draft DPDP Rules, 2025, mark a moment of regulatory awakening—but they must serve as scaffolding for deeper reform, not as its ceiling. India has an opportunity to create a proactive, rights-affirming, and constitutionally sound digital rights framework. The world’s largest democracy should not simply react to harm, but must work to prevent it.

Conclusion

India is not short of laws. It is short of legislative precision, enforcement speed, and constitutional courage. While the United States’ Take It Down Act signals a strong pivot toward accountability in the digital domain, India’s response remains one of incremental regulation, often one step behind the evolving nature of technological harm.

But the tools for transformation already exist within our constitutional framework. The recognition of the right to privacy by the Supreme Court as a fundamental right was not just a declaration; it was an instruction to policymakers to act. Article 21’s guarantee of dignity and autonomy must now find meaningful articulation in a digital world where harm is instantaneous, borderless, and often permanent.

The Draft DPDP Rules, 2025 show that India is aware of the path forward. Yet awareness is not action. Without codified takedown timelines, dedicated laws for image-based abuse, and institutional mechanisms for redress, the country risks turning digital dignity into a paper promise.

To move from “take it down” to “build it up,” India must create an ecosystem that doesn’t just erase harm after it happens. Instead, it must prevent harm through design, punish harm through laws, and offer support to victims to help them heal. The law must speak before the trauma becomes irreversible. And when it does speak, it must speak clearly, firmly, and in defence of the most vulnerable.

The digital age will continue to evolve. So must the Republic’s promise. Now is the time to act.

Ayush Kumar is a law graduate of Chanakya National Law University, Patna, Bihar, India and currently works as a Legal Consultant with the Government of Bihar, India.