Research Article

Topics: Human-Machine Interaction, Political Process, Technology

Five Questions We Often Get Asked About AI in Weapon Systems and Our Answers

By Anna Nadibaidze and Ingvild Bode

The ongoing integration of artificial intelligence (AI) and autonomous technologies in weapon systems raises many questions across a variety of fields, including ethics, law, philosophy, and international security. As part of the AutoNorms project, we have contributed to many of these discussions over the past three years, including by presenting our research in academic and policy-oriented contexts. At these presentations, we have found that some questions come up much more frequently than others. In this blog post, we highlight five of the questions we receive most often and sketch our typical answers to them.

Some of these questions, which also often take the form of remarks, are based on arguments and assumptions that have featured in the scholarly literature and policy debates for many years. Nevertheless, we feel the need to reiterate some of these points because the public discussion continues to rely on problematic narratives and tropes that deserve, in our view, more contestation.

 

1. “Would it not be better for robots to fight robots in wars, in order to save human lives?”

The international version of “Robot Wars” envisioned here is certainly an attractive proposition. After all, it removes humans, and vulnerable human lives, from the equation of war entirely, substituting a series of “robot duels”. Saving human, and especially civilian, lives is an honourable intention considering the ongoing atrocities of warfare around the world. But the vision of machines fighting instead of armies is far removed from the realities of warfare and, more specifically, from how militaries actually plan to use robotic, autonomous, and AI technologies.

Generally, destruction and human hardship are at the very heart of warfare. If the purpose of warfare and the intentions of armed forces were to protect as many human lives as possible, we would long ago have seen different ways of fighting, or perhaps simply fewer wars. Further, the question also appears to assume a situation of near symmetry: our side has robots, their side has robots, so why not just let them fight each other? This symmetry does not exist. Rather, asymmetry in capabilities is the norm. We may therefore see one side using weapon systems integrating autonomous and AI technologies while the other does not. More specifically, militaries plan to use AI technologies for a variety of purposes, including intelligence, reconnaissance, surveillance, data analysis, and supporting targeting decisions. What unites all of these applications and scenarios is a vision of human-machine teaming, or ‘manned-unmanned teaming (MUM-T)’, rather than of replacing humans with machines that fight among themselves.

 

2. “AI is neutral and less biased than humans. AI would be better at conducting war, unlike humans, who behave immorally and commit atrocities”

We often hear from different sets of actors that integrating autonomous and AI technologies into weapon systems would be ethically desirable because these systems would be more efficient and less biased than humans (e.g., autonomous weapon systems (AWS) do not follow religious or moral beliefs, cannot suffer from fatigue, and commit fewer errors).

Such arguments are based on at least two problematic and contestable assumptions:

First, the salient belief that AI technologies can produce outputs in a neutral or unbiased manner. This belief probably originates in a positivist epistemology that assumes a clear boundary between science and politics. But, as many dysfunctional applications of AI in civilian fields demonstrate, AI cannot be free of human bias. Human bias is inherent to the core processes of AI development: how training data are gathered and labelled, how the AI model is trained, and how output is produced. There is so far no evidence that human biases can be fully removed or that AI-based systems could serve as a neutral tool in warfare. Assumptions and claims about what AI systems are or will be capable of should therefore be treated with more care.

The second assumption is that AI technologies in warfare can be preferable to humans because such technologies are not subject to human weaknesses and emotions. This view reduces warfare, a complex, multidimensional, and deeply social activity, to a dataset. As Arthur Holland Michel points out, datasets “can only capture historical trends, patterns, phenomena and statistical distributions”. They do not capture ethics, morality, international law, and other social aspects that are key in war. In a constantly changing and complex environment such as armed conflict, it is not possible to “program” all potential scenarios into an AI system.

Some voices in the debate want to frame emotions and morality as problematic in war, but they have always been part of military strategy. As prominent AI researcher Stuart Russell argues, human compassion can act as a “check on the killing of civilians”. Militaries themselves recognise the need for “situational awareness”, which requires moral, ethical, and legal reflection. Only humans can engage in such reflection. We do not claim that humans always do, considering the atrocities committed around the world. But humans, at least in theory, have the option to do so, while systems integrating AI technologies do not.

 

3. “Regulation doesn’t matter when military powers do whatever they want. Even if we have international regulation, states like China and Russia will not care and continue developing AI weapons”

Establishing global governance in the area of weapon systems integrating AI and autonomous technologies is indeed a challenging task. The discussion on lethal autonomous weapon systems (LAWS) at the UN Convention on Certain Conventional Weapons (CCW), which does not have a negotiation mandate, has de facto stalled. States have different definitions of AWS, different interpretations of AI and autonomous technologies in the military, and, of course, different interests, as the most active developers of these technologies are sceptical of new international regulation. Other options beyond the CCW, or first steps in that direction, are possible. But scepticism remains high. Many argue that, even if some states were to agree to regulate military uses of AI and autonomy, the big powers would simply not care and would continue to do whatever they want.

Using the interests of a specific group of states to justify the lack of regulation is problematic at a time when these technologies are evolving and when norms need to be codified in global governance. Regulation matters and, in the case of weapon systems integrating AI and autonomous technologies, is normatively desirable. No disarmament treaty is perfect. International agreements are the result of compromises and cannot satisfy everyone. There was scepticism towards other treaties too, such as the Treaty on the Prohibition of Nuclear Weapons (TPNW), yet the TPNW was adopted and has entered into force, a normative step forward despite the opposition of the nuclear-armed states.

Due to the dual-use nature of AI technologies and the ambiguities surrounding them, it would be difficult to apply exactly the same mechanisms as in, for example, the nuclear field. However, this is an opportunity to rethink models of disarmament and arms control. Eventually, some great powers, such as the US, may find it in their interest to join the process of international regulation in order to shape global norms on the military use of AI.

 

4. “Fully autonomous weapons, or ‘killer robots’, do not exist yet and are a matter for the future. Why should we care about them now?”

This remark seems to be based on the belief that weapon systems integrating autonomous and AI technologies concern futuristic “killer robots” which gain consciousness and decide to destroy humanity, a prominent narrative in many works of science fiction, especially Western movies. Many researchers in this sphere have spent years countering this imaginary of the Terminator and demonstrating that it is far from what is at stake when we talk about AI in weapon systems.

The whole discussion on the ‘existential risks’ posed by artificial general intelligence also rarely features the existing security, legal, moral, and other risks associated with weaponised and military AI. Militaries have been investing in increasing the autonomous capabilities of their weaponry for decades. Some weapon systems could already qualify as fully autonomous; the Turkish-made loitering munition Kargu-2 has been described as such in a UN report, although it is difficult to verify whether such systems have been used autonomously. The possibility, however, remains. As many researchers point out, the technology for autonomous weapons is already available and does not have much to do with the Terminator, Skynet, or sentient machines.

Ongoing practices of increasing levels of autonomy in militaries and weapon systems are already problematic and deserve to be part of a public debate, rather than being overshadowed by imaginaries of futuristic “killer robots”.

 

5. “Why does human control in the use of force matter? Whether it is autonomous weapons or humans that kill, the result is the same”

Interestingly, we have been asked this type of almost nihilistic question from the start, but it appears to be increasing in frequency. Why does it matter whether a person is killed by a human or by a robot?

It matters on various legal and ethical/moral grounds. Our ethical, moral, and legal systems are deeply humanist. Our concepts and institutions of justice are human-centric: they require human addressees. We can see further evidence of this in the strong protective norms that are in place when it comes to taking life, an extreme form of harm that can be caused by using weapon systems integrating autonomous/AI technologies. Ethically, many scholars argue that delegating life-and-death decisions to AI technologies violates human dignity because it dehumanises, reducing humans to data points. Machines are not able to recognise and appreciate the value of human life.

Beyond this, safeguarding human control over the use of force is also something of a societal gut reaction. The decision to take life should remain the prerogative of humans. Interestingly, this appears to be a near-universal reaction. While states debating LAWS at the CCW do not agree on much, they do agree on the principle of safeguarding human agency, control, and/or judgement in warfare.

As AI technologies are increasingly diffused across many areas of our social lives, we argue that this diffusion will entail setting fundamental guardrails around the types of tasks that we should use such technologies for and those that should be reserved for human judgement. The latter category needs to include taking life, and not just in military settings.

Featured image credit: Alan Warburton / © BBC / Better Images of AI / Plant / CC-BY 4.0
