
AutoNorms at the UN GGE on LAWS in March 2023

The AutoNorms team regularly participates in meetings of the United Nations Group of Governmental Experts (GGE) on emerging technologies in the area of lethal autonomous weapons systems (LAWS). The GGE meetings take place in Geneva and bring together delegations of states parties to the UN Convention on Certain Conventional Weapons (CCW), as well as representatives of civil society, non-governmental organisations, and academia.

On 8 March 2023, Anna Nadibaidze delivered the following verbal statement on behalf of AutoNorms at the First Session of the 2023 GGE on LAWS. 

Thank you very much, Mr. Chair, for giving me the floor.

I am speaking on behalf of AutoNorms, an international research project hosted by the Center for War Studies at the University of Southern Denmark.

Please allow me, Mr. Chair, to make some general remarks regarding the issue of human-machine interaction and human control. More specifically, these remarks concern how the integration of autonomous technologies in targeting already impacts the way that human control over the use of force is exercised.

AutoNorms emphasizes the importance of scrutinizing the existing ways in which states operate weapon systems integrating automated, autonomous, and artificial intelligence (AI) technologies. We argue that some practices in the development, testing, and use of autonomous technologies in weapon systems are already changing the quality of human control, as well as the roles of humans and machines in specific use of force situations.

There are two elements I would like to highlight. They have previously been mentioned by many delegations, but they deserve to be emphasized again as part of the discussion on human-machine interaction.

The first element is complexity.

Increased delegation of tasks to automated and autonomous technologies has made decision-making on the use of force more complex. This complexity means that human operators of weapon systems, such as air defence systems and loitering munitions, have typically become passive supervisors rather than active controllers during a system’s operation. Under rapidly changing conditions, humans may come to trust the system’s outputs uncritically.

Even if a human takes the final targeting decision before authorizing an attack, or, for example, if there is a possibility for a human to exercise a time-restricted veto before the attack, there are questions about the quality of this control. This could be due to potential over-trust in the system, guided by the perception that technology is more precise, more efficient, and better at decision-making than humans. The increasing complexity of human-machine interaction makes it more difficult to engage in critical deliberations about targeting decisions, especially in a fast-changing environment.

The second element is uncertainty.

There is considerable uncertainty associated with AI and autonomous technologies, in both the civilian and military sectors, as has already been investigated by many experts and organisations, some of which are present here in this room. AutoNorms’ ongoing research on loitering munitions, which are becoming a prominent feature of modern battlefields and are proliferating around the world, suggests that the use of these systems has created forms of unpredictability that affect the quality of human control.

There is uncertainty about the role that the human actually plays in decision-making when it comes to loitering munitions integrating autonomous and AI technologies. While these systems are reportedly operated with a human ‘in’ or ‘on’ the loop of control over the use of force, the quality of that control can be compromised and de facto diminished. In other words, keeping a human ‘in’ or ‘on’ the loop might not be enough. We have to investigate much more closely how these human operators are trained and under what circumstances the control that they exercise can be meaningful, however that term is defined. Human operators must, for example, be able to understand the system’s functionality, and they must have sufficient digital literacy to remain in control of the targeting process.

These issues of complexity and uncertainty have been highlighted by many delegations, and mentioned in many working papers, most notably the working paper from the State of Palestine, which raises the possibility of “nominal human input” in the autonomous process and potential situations where the human input is de facto meaningless.

Mr. Chair and distinguished delegates, the repeated performance of practices in the development, testing, and use of weapon systems integrating autonomous and AI technologies already affects human-machine interaction and shapes the quality of human control in the use of force, leading to the emergence of a certain type of norm away from public deliberation forums such as the GGE on LAWS.

As more autonomous and AI technologies become integrated into diverse weapon platforms, and as these platforms proliferate globally, a norm of diminished human control risks spreading silently. Such norms are, however, not necessarily desirable and, importantly, emerge from sites outside the public eye.

We agree with calls by stakeholders such as Article 36 to further discuss and scrutinize the nature of human-machine interaction.

Such examinations could be fruitful for delegations when discussing the elements of human-machine interaction that they consider desirable. As has been mentioned, many delegations have already presented such elements: an “adequate functional understanding of the system” and “a limit to the duration and geographical area of a system’s functioning” (from the Austrian working paper); operators’ “sufficient understanding of the weapon system’s way of operating” (from last year’s working paper by Finland, France, Germany, the Netherlands, Norway, Spain, and Sweden); and parts of draft article 6 of the working paper proposal sponsored by the United States and its co-sponsors. These could be valuable elements of potential legal norms on human control in the use of force.

In the continued absence of international legal norms in the sphere of autonomous weapon systems, essentially a “legal void” or a “normative vacuum”, as the Chair and several delegations highlighted yesterday, ongoing practices in relation to autonomy in weapon systems should continue to be examined and critically assessed.

It would therefore be helpful for states to be more open and transparent about the quality of human control exercised in the weapon systems integrating autonomous and AI technologies that they develop, test, and use. Such transparency is key to allowing greater scrutiny, as well as to building trust and confidence and mitigating the risks associated with autonomy in weapon systems.

Thank you, Mr. Chair.

Featured image credit: Berenike Prem

