The rise of unmanned aerial vehicles (UAVs), particularly those enhanced by artificial intelligence (AI), has redefined the landscape of modern armed conflict. Once limited to surveillance, UAVs are now critical tools for cross-border military operations and targeted killings. As of 2023, at least 19 states have conducted drone strikes, with many more acquiring the technology. These developments raise profound legal and ethical challenges for states, testing the boundaries of international law, human rights, and the rule of law.
Legal and Ethical Challenges Posed to States by AI-Enabled Drones
AI-enabled drones implicate a host of international legal obligations, especially those under the UN Charter, international humanitarian law (IHL), and international human rights law (IHRL). Legally, one of the foremost issues is the violation of state sovereignty. Many UAV strikes, such as those conducted by the US in Pakistan, Yemen, and Somalia, occur without host-state consent, potentially breaching Article 2(4) of the UN Charter, which prohibits the threat or use of force against the territorial integrity or political independence of any state.
Although states often justify such actions under Article 51 of the UN Charter, invoking the right to self-defense, these claims are frequently vague and unsupported by clear evidence of imminent threat or necessity. The lack of proportionality assessments and transparency surrounding targeting decisions exacerbates these concerns, undermining the core IHL principles of distinction, proportionality, and precaution.
From an ethical standpoint, autonomous weapon systems (AWSs) pose a particularly grave threat. These systems operate without full human oversight and lack the contextual awareness to reliably distinguish between civilians and combatants, especially in conflicts involving irregular forces or indistinct uniforms. This increases the likelihood of unlawful harm and breaches of IHL. Moreover, because AWSs possess neither intent nor moral reasoning, they complicate accountability for war crimes, which requires a mental element of intent or knowledge under international criminal law. The result is a legal vacuum in which responsibility is dispersed or negated entirely.
EU member states face additional ethical and legal obligations. The EU AI Act, though it exempts military uses, emphasizes transparency, human oversight, and risk management, mirroring IHL requirements such as the weapons reviews mandated by Article 36 of Additional Protocol I. However, the lack of binding standards for defense applications exposes the EU to dual-use proliferation risks, undermining its position as a global leader in digital ethics.
The normalization of UAV warfare introduces further concerns. The reduced physical and political cost of deploying drones lowers the threshold for engaging in hostilities, weakening traditional deterrents and eroding the ethical imperative to avoid unnecessary conflict. This “riskless warfare” attracts international criticism and damages a state’s multilateral standing, especially when civilian casualties are unacknowledged or unaddressed.
Another pressing issue is the cybersecurity vulnerability of AI systems. AI-based drones are susceptible to integrity and availability attacks, which can corrupt targeting data and cause unlawful strikes. These risks are compounded if such systems are linked to nuclear command and control networks. When states fail to ensure robust cybersecurity, they are not only legally negligent but also morally irresponsible, particularly when civilian lives are lost.
The socioeconomic consequences of military automation also carry ethical weight. As UAV systems become more autonomous, the need for human operators and analysts diminishes, contributing to job displacement within defense sectors. Without adequate retraining programs, this trend may exacerbate social inequality and alienate affected communities—posing long-term political and moral risks.
Finally, the gap between ethical commitments and operational realities raises doubts about state integrity. While many states, particularly in the EU, claim to champion human rights and ethical governance, the discrepancy between policy and practice in military AI deployment undermines credibility and public trust.
Relationship to the Rule of Law
These legal and ethical challenges expose a broader crisis in the rule of law. Defined by principles of transparency, accountability, and equal application of law, the rule of law demands that even the most powerful states operate within established legal frameworks. Yet, the secrecy of drone programs, unilateral legal interpretations, and inconsistent enforcement of IHL and IHRL norms reflect a troubling erosion of these principles.
The use of AI in warfare should not permit states to circumvent legal review or moral accountability. When drones are used in ways that defy international legal standards—or where human responsibility is absent—legal norms lose their constraining power. The diffusion of accountability through AI systems challenges core doctrines like individual criminal responsibility and state responsibility, pillars of the modern international legal order.
Pathways for Improvement: Legal and Institutional Reform
To restore the rule of law and address the legal and ethical challenges posed by AI-enabled drones, comprehensive reforms are required at both international and regional levels.
At the international level, states invoking self-defense for UAV strikes must submit detailed reports under Article 51, including legal justifications, threat imminence, proportionality assessments, and targeting data. These submissions should be made publicly accessible.
The UN Secretariat should create a public platform modeled on the UN Treaty Series for publishing Article 51 submissions. This would increase transparency and state accountability. A UN Special Rapporteur or Panel on AI and Targeted Killings should be established to review state compliance with IHL and IHRL, submitting regular reports to the General Assembly and Security Council.
The UN Security Council should reform its working methods to automatically circulate Article 51 submissions and to mandate legal reviews through the Office of Legal Affairs, curbing vague legal claims. UN fact-finding bodies and Commissions of Inquiry should be empowered to investigate AI-enabled UAV strikes, and their findings must carry legal weight in Security Council deliberations.
A new binding protocol under the Convention on Certain Conventional Weapons (CCW) should define autonomous UAVs, restrict targeted killings, mandate human oversight, and require post-strike investigations.
At the EU level, legislation should be enacted banning fully autonomous lethal drones and enforcing necessity and proportionality standards, with regular compliance audits by the European Defence Agency.
A European Military AI Ethics Council comprising legal, technical, and civil society experts should review UAV operations under the Common Security and Defence Policy (CSDP). Violations should trigger funding suspensions and arms export bans.
To increase transparency, the EU should publish an annual White Paper on Military AI and UAV Use, and expand the European Parliament’s role in oversight. Civil society watchdogs must be publicly funded to ensure independent legal scrutiny.
Conclusion
AI-enabled drones have outpaced the legal and ethical frameworks meant to govern them. The resulting gaps in accountability, transparency, and moral responsibility undermine the rule of law and strain state legitimacy. To prevent a normative collapse, states must pursue immediate reforms that re-anchor drone warfare within established legal and ethical boundaries. This requires bold institutional innovation, sustained public scrutiny, and an unwavering commitment to human dignity, lawful conduct, and global accountability.
Surya Simran Vudathu is an LLM student specializing in International Commercial Law at the University of Nottingham. She graduated from Amity University, Mumbai, in 2024, earning a dual degree in law and arts.