Research Article

Loitering Munitions and Unpredictability: Autonomy in Weapon Systems and Challenges to Human Control

By Ingvild Bode and Tom Watts

A new report published by the Center for War Studies at the University of Southern Denmark and the Royal Holloway Centre for International Security highlights the immediate need to regulate autonomous weapon systems, or ‘killer robots’ as they are colloquially called.

Written by Dr. Ingvild Bode and Dr. Tom F.A. Watts, authors of an earlier study of air defence systems published with Drone Wars UK, the “Loitering Munitions and Unpredictability” report examines whether the use of automated, autonomous, and artificial intelligence (AI) technologies in the global development, testing, and fielding of loitering munitions since the 1980s has shaped emerging practices and social norms of human control over the use of force. It is commonly assumed that the challenges generated by the weaponization of autonomy will only materialize in the near to medium term; the report’s findings challenge this assumption.

The report’s central argument is that whilst most existing loitering munitions are operated by a human who authorizes strikes against system-designated targets, the integration of automated and autonomous technologies into these weapons has created worrying precedents deserving of greater public scrutiny.

Loitering munitions—or ‘killer drones’ as they are popularly known—are expendable uncrewed aircraft which can use sensor-based analysis to hover over an area, detect targets, and explode into them. These weapons are central to the international regulatory debates on autonomous weapon systems (AWS)—a set of technologies defined by Article 36 as weapons “where force is applied automatically on the basis of a sensor-based targeting system”.

The earliest loitering munitions, such as the Israel Aerospace Industries Harpy, are widely considered examples of weapons capable of automatically applying force via sensor-based targeting without human intervention. A March 2021 report authored by a UN Panel of Experts on Libya suggests that Kargu-2 loitering munitions manufactured by the Turkish defence company STM may have been “programmed to attack targets without requiring data connectivity between the operator and the munition”. According to research published by Daniel Gettinger, the number of states producing these weapons more than doubled from fewer than 10 in 2017 to almost 24 by mid-2022. The sizeable role which loitering munitions have played in the ongoing fighting between Russia and Ukraine further underscores the timeliness of this report, fuelling debate over whether so-called ‘killer robots’ are the future of war.

Most manufacturers characterize loitering munitions as “human in the loop” systems whose operators are required to authorize strikes against system-designated targets. The findings of this report, however, suggest that the global trend toward increasing autonomy in targeting has already affected the quality and form of control that human operators can exercise over specific targeting decisions. Loitering munitions can use automated, autonomous, and, to a limited extent, AI technologies to identify, track, and select targets, and some manufacturers allude to their systems’ potential capacity to attack targets without human intervention. Human operators may therefore not always retain the ability to visually verify targets before attack. The report highlights three principal areas of concern:

  1. Greater uncertainties regarding how human agents exert control over specific targeting decisions.
  2. The use of loitering munitions as anti-personnel weapons and in populated areas.
  3. Potential indiscriminate and wide area effects associated with the fielding of loitering munitions.

This report’s analysis is drawn from two sources of data: first, a new qualitative data catalogue which compiles the available open-source information about the technical details, development history, and use of autonomy and automation in a global sample of 24 loitering munitions; and second, an in-depth study of how such systems have been used in three recent conflicts—the Libyan Civil War (2014-2020), the 2020 Nagorno-Karabakh War, and the War in Ukraine (2022-present).

Based on their findings, the authors urge the various stakeholder groups participating in the debates at the United Nations Convention on Certain Conventional Weapons Group of Governmental Experts and elsewhere to develop and adopt legally binding international rules on autonomy in weapon systems, including loitering munitions as a category therein.

It is recommended that states:

  1. Affirm, retain, and strengthen the current standard of real-time, direct human assessment of, and control over, specific targeting decisions when using loitering munitions and other weapons integrating automated, autonomous, and AI technologies, as a firewall for ensuring compliance with legal and ethical norms.
  2. Establish controls over the duration and geographical area within which weapons like loitering munitions that can use automated, autonomous, and AI technologies to identify, select, track, and apply force to targets can operate.
  3. Prohibit the integration of machine learning and other forms of unpredictable AI algorithms into the targeting functions of loitering munitions because of how this may fundamentally alter the predictability, explainability, and accountability of specific targeting decisions and their outcomes.
  4. Establish controls over the types of environments in which sensor-based weapons like loitering munitions that can use automated, autonomous, and AI technologies to identify, select, track, and apply force to targets can operate. Loitering munitions functioning as AWS should not be used in populated areas.
  5. Prohibit the use of certain target profiles for sensor-based weapons which use automated, autonomous, and AI technologies in targeting functions. This should include prohibiting the design, testing, and use of autonomy in weapon systems, including loitering munitions, to “target human beings”, as well as limiting the use of such weapons “to objects that are military objectives by nature” (ICRC, 2021: 2).
  6. Be more forthcoming in releasing technical details relating to the quality of human control exercised over specific targeting decisions when operating loitering munitions. This should include sharing, where appropriate, details regarding the level and character of the training that human operators of loitering munitions receive.

Funding

Research for the report was supported by funding from the European Research Council under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 852123, AutoNorms project) and from the Joseph Rowntree Charitable Trust. Tom Watts’ revisions to this report were supported by his Leverhulme Trust Early Career Fellowship (ECF-2022-135). The report was written in collaboration with Article 36.

About the authors

Dr Ingvild Bode is Associate Professor at the Center for War Studies, University of Southern Denmark, and a Senior Research Fellow at the Conflict Analysis Research Centre, University of Kent. She is the Principal Investigator of the European Research Council-funded “AutoNorms” project, which examines how autonomous weapons systems may change international norms governing the use of force. Her research focuses on processes of normative change, especially through the study of practices relating to the use of force, military artificial intelligence, and associated governance demands.

Dr Tom F.A. Watts is a Leverhulme Trust Early Career Fellow based at the Department of Politics, International Relations, and Philosophy at Royal Holloway, University of London. His current project, “Great Power Competition and Remote Warfare: Change or Continuity in Practice?” (ECF-2022-135), examines the relationship between the strategic practices associated with the concept of remote warfare, the dynamics of change and continuity in contemporary American foreign policy, and autonomy in weapons systems.
