
Submission on Autonomous Weapon Systems to the UN Secretary General

The following is the AutoNorms project’s response to Resolution 78/241 on “Lethal autonomous weapons systems” adopted by the United Nations (UN) General Assembly on 22 December 2023. The resolution requests the UN Secretary-General to seek views, including those of Member States, civil society, and the scientific community, on “lethal autonomous weapons systems, inter alia, on ways to address the related challenges and concerns they raise from humanitarian, legal, security, technological and ethical perspectives and on the role of humans in the use of force”. The AutoNorms team welcomes the opportunity for representatives of academia to submit their views on this important and timely topic.

Practices shaping norms on human control in the use of force

The AutoNorms project has examined the trajectories of weapon systems incorporating automated, autonomous, and AI technologies, such as air defence systems and loitering munitions. In particular, AutoNorms has investigated the consequences of the development and use of these systems for human decision-making in warfare. Although often discussed in the context of autonomous weapons systems (AWS), existing systems are typically not used “fully autonomously”: some human decision-making is always involved across such systems’ lifecycles, for example when humans authorize attacks.

However, our research demonstrates that the quality of control human operators can exercise may be compromised by the complexity of the tasks they need to perform and the demands placed on them, such as time pressure and the need to oversee many networked systems. Human control in specific use-of-force situations may therefore de facto become ‘meaningless’.[1] Autonomous and AI technologies in weapon systems have thus already transformed human decision-making in warfare. An emerging social norm, or ‘understanding of appropriateness’,[2] of diminished human control is taking shape through how weapon systems incorporating autonomous and AI technologies have been designed and used. This emerging norm accepts a reduced quality and form of human control as ‘normal’ and ‘appropriate’.[3]


The governance gap

As autonomous and AI technologies become increasingly integrated into diverse weapon platforms, and as these platforms proliferate globally, the norm of diminished human control risks spreading silently. In the absence of specific, legally binding international regulation, such a norm emerges problematically from sites outside the public eye and beyond the scrutiny of the public debate at the UN.[4]

This emerging norm is a societal challenge, a security threat, and a public policy problem because it undercuts the exercise of human agency over the use of force. These developments raise the fundamental ethical question of whether life-and-death decisions should be ‘delegated’ to machines. They are also associated with many political and legal ambiguities, including in relation to key principles of international humanitarian law and international human rights law.

The diminishing quality of human control in the use of force results from a gap in the governance of AWS and, more broadly, of AI in the military domain.[5] The main international forum where the global debate has been taking place since 2016—the Group of Governmental Experts (GGE) on LAWS within the framework of the UN Convention on Certain Conventional Weapons (CCW)—has so far shown limited progress towards agreement on international legal norms.[6]

As we demonstrate in our research, some states’ practices have adversely affected the GGE’s regulatory potential. This is mostly because the practices states currently perform in designing and using weapon systems integrating autonomous and AI technologies are not sufficiently examined at the GGE and other governance forums. States also have different perceptions of AI and autonomous technologies, different interests, and different visions of the measures to be taken at the global level.[7] Given that the GGE operates by consensus, and considering current geopolitical tensions as well as the view promoted by some states that the development of these technologies should not be stigmatized, finding substantial agreement within this format remains challenging.

Practices of designing, training personnel for, and using weapon systems integrating autonomous and AI technologies have the potential to undermine public processes of international legal norm-making at the CCW and beyond. There is therefore a need for more active efforts in the global governance of AWS, including via binding legal norms.


Way forward

We suggest a two-fold way forward to secure the governance of autonomous and AI technologies in weapon systems, broadly defined.

First, there is a need for legally binding international regulations to govern autonomous and AI technologies in weapon systems. Should progress at the GGE on LAWS remain absent, states could move the discussion and negotiation of such an instrument to the UN General Assembly (GA).

This move has at least two potential advantages. First, in contrast to the CCW, the GA brings together the entire UN membership, enabling all member states to have a voice in discussing and negotiating ways to govern autonomous and AI technologies in weapon systems and the military domain.

Second, the GA is not bound by consensus rules, allowing for substantive progress even in the absence of universal agreement. While GA resolutions are not legally binding, a negotiated set of standards can still be an important springboard for normative progress in the sphere of military applications of AI.[8]

We argue that the ‘two-tier’ approach, which has gained significant support among states parties to the CCW, is the most promising way forward towards legally binding international norms on AWS:

  1. AWS that apply force without human control, supervision, and assessment—and systems that are unpredictable—should be prohibited. A prohibition should also apply to systems designed or used to apply force directly to persons.
  2. Systems integrating autonomous and AI technologies in targeting should be subject to strict regulations via safeguards such as temporal and spatial restrictions, limits on situations of use, documentation, and transparency, as well as requirements ensuring an appropriate quality of human agency over use-of-force decision-making.


Second, a top-down legal process towards securing the governance of AI and autonomous technologies in weapon systems should be accompanied by an international standard-setting process aimed at changing, from the bottom up, the practices actors perform when designing and using such systems.

This could take the form of a list of operational standards to sustain and strengthen human control over the use of force, developed by a geographically diverse group of experts with ethical, technical, legal, and policy backgrounds under the auspices of an international standards association such as the Institute of Electrical and Electronics Engineers (IEEE) Standards Association (IEEE SA) or the International Organization for Standardization (ISO). Work that could support such a direction is already under way, notably by the IEEE SA Research Group on AI and Autonomy for Defence Systems,[9] which the Principal Investigator of AutoNorms co-chairs. A standard created through such a bottom-up mechanism is a necessary step towards top-down regulation, and even once legally binding regulation is in place, such a standard could advance its implementation.


Bibliography

[1] Ingvild Bode and Tom Watts, “Meaning-Less Human Control. The Consequences of Automation and Autonomy in Air Defence Systems” (Oxford and Odense: Drone Wars UK & Center for War Studies, February 2021), https://dronewars.net/2021/02/19/meaning-less-human-control-lessons-from-air-defence-systems-for-lethal-autonomous-weapons/; Ingvild Bode and Tom Watts, “Loitering Munitions and Unpredictability: Autonomy in Weapon Systems and Challenges to Human Control” (Odense and London: Center for War Studies & Royal Holloway Centre for International Security, May 2023), https://www.autonorms.eu/loitering-munitions-and-unpredictability-autonomy-in-weapon-systems-and-challenges-to-human-control/.

[2] Ingvild Bode and Hendrik Huelss, “Autonomous Weapons Systems and Changing Norms in International Relations,” Review of International Studies 44, no. 3 (2018): 393–413, https://doi.org/10.1017/S0260210517000614.

[3] Ingvild Bode, “Practice-Based and Public-Deliberative Normativity: Retaining Human Control over the Use of Force,” European Journal of International Relations 29, no. 4 (2023): 990–1016, https://doi.org/10.1177/13540661231163392.

[4] The AutoNorms Project, “AutoNorms at the UN GGE on LAWS in March 2023,” March 9, 2023, https://www.autonorms.eu/autonorms-at-the-un-gge-on-laws-in-march-2023/; The AutoNorms Project, “AutoNorms at the UN GGE on LAWS,” September 26, 2022, https://www.autonorms.eu/autonorms-at-the-un-gge-on-laws/.

[5] Ingvild Bode and Tom Watts, “Loitering Munitions: Flagging an Urgent Need for Legally Binding Rules for Autonomy in Weapon Systems,” ICRC Humanitarian Law & Policy Blog, June 29, 2023, https://blogs.icrc.org/law-and-policy/2023/06/29/loitering-munitions-legally-binding-rules-autonomy-weapon-systems/.

[6] Anna Nadibaidze, “Regulation and Prohibition of Autonomous Weapons Systems: A Future Outside the CCW?,” The AutoNorms Blog, November 3, 2022, https://www.autonorms.eu/regulation-and-prohibition-of-autonomous-weapons-systems-a-future-outside-the-ccw/.

[7] Ingvild Bode et al., “Prospects for the Global Governance of Autonomous Weapons: Comparing Chinese, Russian, and US Practices,” Ethics and Information Technology 25, no. 1 (2023): 5, https://doi.org/10.1007/s10676-023-09678-x.

[8] Ingvild Bode and Hendrik Huelss, Autonomous Weapons Systems and International Norms (Montreal: McGill-Queen’s University Press, 2022).

[9] See https://standards.ieee.org/industry-connections/autonomy-ai-defense/.

