This interview with Frédéric Mégret—Professor of International Law at McGill University and a leading scholar on the relationship between law, violence, and armed conflict—was conducted at a moment when autonomous weapons systems and artificial intelligence have moved from speculative concern to operational reality. As AI-enabled targeting systems proliferate across battlefields from Ukraine to Gaza, and as debates over Lethal Autonomous Weapons Systems (LAWS) stall at the United Nations Convention on Certain Conventional Weapons, questions of agency, responsibility, and the very meaning of violence have been thrust into urgent normative crisis.
Experiencing these transformations not as distant technological abstractions but as vectors reshaping the lived conditions of war prompted a renewed inquiry into the moral and legal architecture through which violence is authorized, delegated, and rendered invisible. Against this background, the conversation explores how international law—long complicit in both restraining and reproducing violence—confronts an unprecedented challenge: the delegation of lethal decision-making to entities that act without consciousness, intentionality, or moral accountability.
Drawing on Mégret’s sustained engagement with the foundational violence embedded in legal structures, the interview probes the erosion of responsibility when killing becomes optimized by code, the transformation of the ethical landscape when violence is sanitized through technological mediation, and the structural inequalities reproduced through unequal access to military AI. It asks whether existing concepts of agency, command responsibility, and civilian harm remain intelligible when the human is systematically removed from the loop—and whether the promise of precision and efficiency masks a deeper normalization of limitless violence.
Rather than treating AI in warfare as a purely technical or regulatory problem, the discussion situates it at the intersection of law, ethics, and power—asking whether international law can meaningfully constrain violence when the decision to kill no longer requires human deliberation, and whether the outsourcing of moral choice to algorithms risks making war not only more efficient but existentially exorbitant. This interview is offered not as a technological assessment, but as a critical attempt to understand what is lost when violence is delegated to machines.
With this in mind, JURIST contributor AmirAli Maleki—writing from Iran amid intensifying regional conflicts increasingly shaped by automated systems—approached Frédéric Mégret to explore the philosophical and legal questions that emerge when war becomes algorithmic.
AmirAli Maleki: In your work, you argue that international law both restrains and reproduces violence. How does the emergence of AI systems challenge this dual role?
Frédéric Mégret: The relationship between law and violence is a given, but the extent to which it is neglected is remarkable. The law is always rich with whatever foundational violence it has entrenched, but it renders that violence invisible. This is never as true as in the area of international law that purports to regulate armed violence, that is, the most visible and spectacular form of physical violence. We have reason to be wary of the extent to which AI might be a game changer, and so at the outset it is good to think in terms of these broad continuities, continuities sustained by a global balance of force, ideological conformity, or the pressures of humanitarian rescue. The question is what AI brings to this panorama. I would suggest that AI intervenes late in a process of digitalization and virtualization that has long taken hold of the technological world. One of its bases is the data-ification of the world, its transformation into information. There is some promise in this, evidently, including an enhanced ability to understand what surrounds us. But data-ification is also a considerable form of violence in itself. First, it commodifies the human. That data can then be shared across platforms, with huge implications for privacy. Privacy was already one of the neglected casualties of war (what privacy is there in a refugee tent?), but it is even more in danger when your digital consumption patterns, combined with geolocated data, are not only monitored but scanned, exploited, and analyzed. Second, it simplifies the human. That data is always produced for single purposes that evoke a very one-dimensional understanding of what it means to be human. A victim in war may need nourishment and protection, but they may also be a resister, a citizen, a woman, a child or a man, and so on. Third, it abstracts the human, something the laws of war themselves had long done by seeing the human in narrow universalist terms. So AI arrives relatively late on that scene, ready to reap the rewards of the many revolutions that have occurred over the last decades, but also to accelerate their natural slope, a slope that is often dangerous and, perhaps more tellingly, unknown. What AI does is increase the likelihood that this data will not simply sit unused because there is too much of it to process at any one point, but will constantly be fed into the war machine. And of course, at some point the danger is that AI will be used for decision-making, first to assist but eventually as a substitute.
Maleki: Do you think the concept of “agency” in international law is sufficiently equipped to handle entities that act without consciousness or intentionality, such as autonomous weapons?
Mégret: Agency is overrated, paradoxically, even for individuals. We have long known this, even as the prevailing investment in international criminal law has meant that there has been a sort of obsession with individual responsibility. We have reason to think that individuals are not as agentic as the theory of individualism would predict, or as they are made out to be. But the agency of entire societies is itself a very problematic concept, except when reduced to a very simple fiction such as state responsibility. There have long been calls to pay more attention to systems in our regulatory efforts, but this is easier said than done. The typical move that lawyers promote is one from subjective, mens rea responsibility to objective, strict or absolute responsibility. That is one way of dealing with the problem, but it hides many complexities. Should a population be made responsible for what it “did”? And what does it mean for a population to act? What if a population was deeply divided about how society should act? Should it be collectively blamed for the decisions of a few? Machine systems bring a further complication into that already fraught theorizing. The most promising avenue is to emphasize the degree to which no machine is ever fully autonomous: behind every “autonomous” decision there is an earlier decision to “sign off” on a technological development that allows machines to make autonomous decisions. The problem there is that the production of technology is very diffuse, it is a team effort, and then there is its operation. Some systems have become so complicated that those activating them do not know all the elements of how they operate; this happens in some plane crashes, where it turns out that not even the pilots understood how their machine worked. At some point, you have to wonder if the long-term arc of technology is not precisely to make the human “innocent”, providing humans with a useful shield against accusations (“the machine did it” or “the machine made me do it”). We need to see through such attempts by insisting that humans be kept in the loop and, in particular, that a flesh-and-blood creature sign off before systems are activated that can generate violence.
Maleki: How should international law conceptualize the meaning of violence when the agent executing it is not a human being, especially in relation to jus ad bellum and jus in bello definitions?
Mégret: Violence was traditionally understood very much as something that human beings did to other human beings. The fact that they did so typically in large groups was an important variable, but one that did not change the fundamental inter-se dimension of human violence. No one would have thought that the sword in a Medieval battle was guilty of anything. As the ambiguous aphorism goes, “guns don’t kill people, people kill people”. But now a degree of violence is embedded into the weapon. In truth, this has always been true to a degree. For example, dum-dum bullets, napalm, or of course nuclear weapons are not innocent of the designs they were built for; they will cause harm only if they are used by a human, but that harm is inherent to their design. A nuclear reaction in an H-bomb is triggered by the launch, but it has its own inertia. That inertia, however, is amplified when an initial decision leads to a long and self-sustained chain of violence. So responsibility is still responsibility for violence, but it is a responsibility for launching certain processes in conditions where one knows that they will produce a certain degree of violence. That is not necessarily illegal under either the jus ad bellum or the jus in bello, but it introduces a lag. If you impose that sort of danger on the world, then you should be responsible, at the very least, for all its foreseeable consequences and maybe more.
Maleki: What forms of responsibility—individual, state, command, or structural—are most at risk of erosion when violence is delegated to autonomous systems?
Mégret: One way of thinking of the long-term trajectory of the practice of war, particularly as it relates to issues of responsibility, is as making sure that no one is ever held responsible. Despite the loose commitment to responsibility, it remains very much the exception and not the norm, and this is becoming more so. One way in which responsibility is shunned is by making sure that it is spread widely: if everyone is responsible, then no one is. You can see this in many practices of contemporary warfare, where the military makes sure that everyone is “tainted” by harmful practices, but no one decisively more so, so that it becomes difficult to know who is really in charge. A fragmentation of the decision-making process – a sort of extreme form of the division of labor – means that it is perennially difficult to know who pressed the metaphorical trigger because there is no longer a trigger. No decision, only processes, and no individual, only complex systems, including those sustained by machines.
There is a valid criticism that responsibility has become too individualized. For example, unlawful violence in war is increasingly sustained by patterns of popular political support, notably through social media. We cannot understand excesses in war outside a conception of societal racism and a range of other murderous political passions. Making all responsibility structural, however, is the best way of ensuring that no one is ever held responsible. Totalizing narratives of responsibility give up too easily on the necessary task of apportioning responsibility. Remember that these are not mutually exclusive: individuals, societies, the state, and the world at large can all be responsible in meaningful ways for unlawful harm. What will be needed is to push back against these evasive maneuvers to understand, in any given case, who makes it possible for a machine to engage in autonomous reasoning. Who, in other words, tried to make the machine human, or at least quasi-human? And what is the specific responsibility of having abdicated continuous moral and legal decision-making to AI?
Command responsibility is an interesting one here to start us off. One would typically not think of humans “commanding” autonomous weapons, although that may be a useful normative approximation. Consider that even when it comes to commanding humans, the argument is precisely that these humans are themselves autonomous to some degree. Command responsibility emerged both to take that autonomy into account (the superior is not responsible for unpredictable behavior they could not have known of) and to create specific supervisory obligations for the commander (the superior is responsible for what they knew or even should have known). This is exceptional in terms of legal liability, which ordinarily does not extend to the acts of others, but the military realm has long been an exception to ordinary principles. This is typically justified on account of the inherent dangerousness of deploying the military and therefore the need for some compensatory supervision. Think of the old adage “with great power comes great responsibility”. If you have the power to ask human beings to kill others whilst risking their own lives, then you are unleashing a specific form of violence that can quickly get out of hand, and thus the need to rise to the occasion. You could probably say the same thing about autonomous weapons. They are inherently dangerous, and so there should be a special responsibility on those who deploy them.
Maleki: Is delegating violence to AI a continuation of long-standing patterns of delegating lethal authority (e.g., to soldiers, mercenaries, or bureaucracies), or does it represent a fundamentally new phenomenon?
Mégret: This is a very interesting question. There has certainly long been an interest in delegating dirty business to other actors, with a view to insulating oneself from responsibility but also from unpleasantness. Violent and hegemonic actors are also, weirdly, aware that excess violence harms them too. There are now detailed accounts of drone operators suffering from PTSD. Creating distance between the decision to engage in violence and its impact is crucial. You might say having a battle-hardened warrior class, as opposed to soft-hearted conscripts, is one way you do this. Within the warrior class you can also have groups specially tasked with infamous business, like the Nazis’ Einsatzgruppen, the Sudanese Janjaweed, or Colombian paramilitaries. Russia is involved in an extensive effort to recruit the downtrodden, criminals, and those on the margins of society to engage in pathological levels of violence. Animals, too, have sometimes been used historically to seemingly break the chain of responsibility.
So I would not say that AI is particularly new; rather, it is the continuation of outsourcing to technology, which acts as a sort of force multiplier. Just as strategic bombers never see their target, one of the ideas is to avoid face-to-face confrontation with one’s victims, to minimize angst and a sense of human responsibility. There is also an effort to make killing clinical, at least from the point of view of the killer: humane killing not influenced by human passions but merely the implementation of a piece of software. The refinement with AI is not only the presumably continued distance from those whom one kills, but the fact that even the deliberative process that goes into that killing is automated. Taking out the humans is a way of protecting them but also of loosening all constraints, since humans do not have to be directly privy to each and every decision (in fact, they cannot be, since they could not possibly process the right amount of information in so little time).
Maleki: If violence becomes optimized by code rather than deliberated by humans, how does this transform the moral and ethical landscape of war?
Mégret: This is perhaps one of the trickiest questions. One important thing is not to romanticize humans. We have every reason to think that we are a murderous, unreliable, and violent species; there is no shortage of evidence of the countless war crimes humans have committed. Delegating to the machine is dangerous precisely because it is seductive and not implausibly connected to humanitarian dividends. This is the very same debate as with self-driving cars. There is a lot of hand-wringing about entrusting driving to machines (as if planes, which are potentially much more dangerous, had not long been entrusted to automatic piloting for most purposes). But machines don’t drink and drive, they don’t succumb to road rage, and they don’t have the low impulse control of an 18-year-old driver. So, although there may be accidents with self-driving cars that will garner a lot of attention, the point is that there may be overall far fewer accidents with them. In part, the same is true of machines in war. The automated drone is not susceptible to the fog of war, is not enraged that a comrade was killed, and is not a fanatical zealot. But the problem is that killing in war is not quite the same as driving a car. It involves something inherently troublesome, violent, and potentially immoral and illegal (precisely unlike driving) which may require a different approach. If no one feels the pinch of guilt when pressing the trigger (and even that, as we know, is not much of a pinch), then violence may come to be perceived as a purely administrative, managerial exercise in streamlining efficiency rather than a momentous decision about human life, one of the most momentous decisions that any human can in fact take: the decision about who gets to live or die.
Maleki: Is there a risk that delegating lethal decisions to machines normalizes violence by making it feel more distant, efficient, or sanitized?
Mégret: That is really the point. None of this is accidental. This makes it more likely that AI will be used, because it effectively offloads the costs of violence onto its targets, who may not even know what hit them and whose voice is typically neglected. Not only are soldiers operating within AI-saturated environments increasingly unlikely to be traumatized, but they are increasingly unlikely to be held to account. This corrodes all of the ways that had been devised over the last couple of centuries to try to bring some modicum of moderation to war. That these processes were only very partially successful does not mean that we may not come to regret a time of relative innocence when humans had to live with the consequences of their acts, which is very much a condition for legal and moral responsibility.
Maleki: Could AI ever be said to “understand” civilian harm in any sense relevant to International Humanitarian Law (IHL) compliance?
Mégret: Anything can be programmed into the machine as a sort of quantum. The machine could understand, if it is fed the right information, what harm is, or at least what negative consequences are. What the machine cannot do is feel the loss of a human. It cannot empathize. Much of our moral responsibility, however, is made up not of hard or soft Kantian rules but of the experience of relating to other humans. Such is the Rousseauan ideal in war: even as our states may be adversaries, we as soldiers or civilians on either side are never enemies, because the other is my equal. There is something imponderable about the loss of children in Gaza, Ukraine, or Somalia that confronts us with our deepest intuitions about loss. AI, by contrast, is almost guaranteed to preserve war-making as an ethically deeply exorbitant practice going forward, because it expunges some of its costs. It is all the more likely to do so given that AI is increasingly likely to think of itself as God, as far superior to mere humans. Having said that, it is also possible that AI in war will produce a revulsion that, in due course, might lead to its regulation. Once the genie is out of the bottle, though, it will be very hard to put it back.
Maleki: Is a prohibition on fully autonomous weapons normatively desirable, practically achievable, or both—and how do current UN/CCW debates inform this?
Mégret: Some progress has been made towards a treaty, perhaps as early as this year. I note the irony that the acronym most often used is LAWS (Lethal Autonomous Weapons Systems), so that talking of abolishing LAWS through law is at least a coincidentally paradoxical proposition. That aside, the risk is that a familiar pattern will develop in which the usual suspects want to ban LAWS (European, African, and Latin American states in particular) and the same states as ever want to maintain them (the big powers, etc.). There is not much point in a treaty that is ratified by states with little access to AI on the battlefield and not ratified by the major players. I also wonder whether there may not be narrow cases where LAWS would be preferable to humans – although I realize this is not a popular argument, it is worth stressing, again, that humans are hardly faultless in the waging of war. There is a debate on whether the goal should be abolitionist or reformist, for example finding ways to make sure humans are in the loop at some point, although not necessarily 100% of the time.
Maleki: How might the widespread adoption of AI weapons shift power dynamics between technologically advanced states and states in the Global South?
Mégret: I should also say that AI is a manifestation of hegemonic power, because only certain powers have it. Only certain states currently have the wherewithal to insulate themselves from the extreme violence they inflict. In that respect, AI provides a competitive edge, and it is precisely for this reason that states who have that edge are unlikely to relinquish it. AI may yet be generalized, but it is not like cheap drones, a technology that can easily be replicated and mastered on the battlefield. As with previous technological developments, the most likely outcome is that technological differentials will be exacerbated, reinforcing the asymmetry of war and the unique proclivity of technology-rich powers to fight very devastating wars whilst claiming the full privilege of the laws of war, while the technologically challenged side often gets caught having to fight the only way it knows or can, which is often visibly ugly.
Maleki: Do you foresee new forms of structural violence emerging from unequal access to military AI technologies, including algorithmic bias in training data?
Mégret: Absolutely. We are already there: think of “signature strikes”, which many studies have documented to be riddled with prejudices, assumptions, and cultural misunderstandings. AI will consistently err towards plausible deniability and inflicting as much violence as possible without incurring liability, all the while protecting “its” humans. AI also portends the possibility of round-the-clock violence, as on Ukrainian battlefields, a dystopian scenario in which humans end up being the prey of powerful technological constellations. It is nothing if not relentless.
Maleki: Given the transformative impact of AI on warfare and international security, do you think there is a need for a new international organization, specifically mandated to regulate and restrain the military uses of AI? If so, what should distinguish such an institution from existing frameworks like the UN or the CCW, in terms of scope, enforcement, and normative authority?
Mégret: This is a complex question. I think we need buy-in from the military. There needs to be an understanding that whatever enhances your technological might will also make you vulnerable at some point down the road. This is a bit like chemical weapons during the First World War: be careful what you ask for and check which way the wind is blowing, lest your decision to deploy mustard gas come back to haunt you. Right now, I am not convinced that a new international organization or treaty is necessarily the way to go. We need public opinion to be more involved (“not in our name”) to target AI-using states from within, and we also need to connect the debate on AI in warfare to the broader debate we urgently need to have about AI in general if we are to thrive as a species.
Maleki: Finally, what do you see as the future of responsibility in international law when violence becomes increasingly mediated by algorithms? Is a new normative framework required to safeguard human dignity and accountability?
Mégret: There is a risk that we will be distracted by the latest technology and become obsessed with the gimmick rather than the age-old project that the humans behind it are trying to implement. AI is funded by corporations and governments; it is part of a technological race that has its own dynamics but, ultimately, it is embedded in late capitalist society, its fascination with techno-fixes, its narrow functional instrumentalism, and its tolerance for almost unlimited violence. Let’s have a conversation about AI if it is an opportunity to have a conversation about “us” and the state of our societies, including the international system. But let’s not have conversations about technology for its own sake. Right now, and perhaps for a little longer, the technology is still ours, and therefore we can theoretically make it what we want it to be. That window may close.
Editor’s note:
Frédéric Mégret is Full Professor and holder of the Hans & Tamar Oppenheimer Chair in Public International Law at McGill University. His research focuses on international criminal justice, international human rights law, international humanitarian law, and the relationship between law and violence. He received an honorary doctorate from the University of Copenhagen in 2022 and was the James S. Carpentier Visiting Professor at Columbia Law School in 2024-25. His work has appeared in leading journals including the European Journal of International Law, the American Journal of International Law, and the Leiden Journal of International Law.
AmirAli Maleki is a researcher specializing in international law and the philosophy of law, and the Editor of PraxisPublication.com. He works in the fields of political philosophy, Islamic philosophy, and hermeneutics.