Research Article


Global Governance of AI in the Military Domain

The AutoNorms team has submitted the short paper below to the United Nations Office of the Secretary-General's Envoy on Technology. In preparation for the first meeting of the Multi-stakeholder High-level Advisory Body on AI, the Office issued a call for papers on global AI governance. AutoNorms' submission, written by Ingvild Bode, Hendrik Huelss, Anna Nadibaidze, and Tom F.A. Watts, touches upon the issue of global governance of AI technologies in the military domain.

1. Why we need global governance of AI in the military domain

Autonomous/AI technologies in weapon systems have already transformed human decision-making in warfare. In the ongoing Russian invasion of Ukraine, for example, both sides of the conflict have used one-way attack drones that have the latent technical capability to identify, track, select, and strike targets autonomously, that is, without further human intervention once activated. These platforms are examples of a growing number of weapon systems that incorporate autonomous/AI technologies in targeting, a category which also includes air defence systems.

Our research as part of the AutoNorms project has examined the trajectories of such existing weapon systems incorporating autonomous/AI technologies, how these trajectories have changed over time, and some of the major consequences of these changes. In particular, we have investigated what these development and use trajectories mean for human decision-making in warfare.

Although often discussed in the context of lethal autonomous weapon systems (LAWS), existing weapon systems such as one-way attack drones are not fully autonomous. There is always some human decision-making involved, for example in the form of humans authorising attacks, validating data inputs, and/or verifying the outcome. But the quality of control that human operators can exercise often appears to be compromised by the complexity of the tasks they need to perform, the skillsets required, and the demands placed upon them, for example in terms of speed and of overseeing multiple, networked systems. In other words, we see an emerging social norm, “an understanding of appropriateness”, of diminished human control taking shape through how weapon systems incorporating autonomous/AI technologies have been designed and used.

This emerging norm accepts a diminished, reduced form of human control when interacting with autonomous/AI technologies as ‘normal’ and ‘appropriate’. It is a societal challenge, a security threat, and a public policy problem because it undercuts the exercise of human agency over the use of force. These developments raise fundamental ethical questions: should life-and-death decisions be ‘delegated’ to machines, who gets to decide how these machines operate, and who is responsible for the outcomes of battlefield decisions? They also create many political and legal uncertainties: to what extent can humans understand how AI technologies ‘make decisions’ about targeting, how can they be certain that these ‘decisions’ are reliable, and who would be accountable for mitigating detrimental features or system failures?

As more autonomous/AI technologies become integrated into diverse weapon platforms and these platforms proliferate globally, the norm of diminished human control risks spreading silently. In the absence of international legally binding regulations, such a norm emerges problematically from sites outside of the public eye and the UN’s scrutiny.


2. The governance gap

The diminishing quality of human control results from a governance gap surrounding AI in the military domain. There are currently no specific legally binding international regulations of AI technologies in the military domain, and such regulations are not on the horizon in the short term. The main international forum where the global debate has been taking place since 2016, the Group of Governmental Experts (GGE) on LAWS within the framework of the UN Convention on Certain Conventional Weapons (CCW), has so far shown limited progress towards agreeing international legal norms.

As we demonstrate in our research, the practices of states such as China, the Russian Federation, and the United States have adversely affected the GGE’s regulatory potential. States have different perceptions of technologies, different interests, as well as different visions of measures to be taken at the global level. Given that the GGE is a forum run by consensus, and considering current geopolitical tensions as well as the view promoted by some states that the development of these technologies should not be stigmatised, finding substantial agreement within this format remains challenging.

Gradually, the discussion seems to be moving beyond applications of AI and autonomy in weapon systems towards a broader consideration of AI and autonomy in the military domain. For example, the framework of ‘Responsible AI’ in the military domain (REAIM) features increasingly often in the discourse of both states and non-state actors. In February 2023, the first REAIM Summit was held in The Hague, solidifying the Responsible AI concept. The event can be commended for bringing together actors from various sectors (industry, academia, civil society, etc.) and for broadening the debate beyond the state level. It resulted in a Call to Action, which, however, is not a legally binding document.

However, the concept of Responsible AI remains ambiguous and ill-defined, with actors demonstrating different understandings of the term. Accordingly, it allows for different interpretations of policy responses, leading many actors to advocate for voluntary unilateral measures, such as political declarations, sets of standards, ethical principles, or codes of conduct, rather than multilateral governance frameworks. Such measures are good first steps in the discussion on responsible uses of AI in the military. However, they should be complements to, rather than substitutes for, international legally binding regulations which would codify an appropriate level of human control over the use of force.

Some regional initiatives have also been launched in this area. Notably, a conference organised by Costa Rica resulted in the Belén Communiqué, where 33 states from Latin America and the Caribbean called for a legally binding instrument with prohibitions and regulations of autonomy in weapon systems. Similarly, the Caribbean Community issued a declaration urging the adoption of an international legally binding instrument on LAWS.

Global governance of military AI and LAWS is a challenging task, but it is not impossible, as demonstrated by regulations and prohibitions of other weapon technologies, such as anti-personnel landmines and cluster munitions.


3. The way forward

We suggest a twofold way forward to secure the governance of AI and autonomous technologies in the military domain.

First, and ultimately, there is a need for legally binding international regulations to govern autonomous/AI technologies in weapon systems. In the absence of progress at the GGE, states could move the discussion and negotiation of such an instrument to the UN General Assembly (UNGA). This move has at least two advantages. First, in contrast to the CCW, the UNGA brings together the entire UN membership, thereby giving the full range of UN member states a voice in discussing and negotiating modes of governing autonomous/AI technologies in the military domain. Second, the UNGA is not bound by consensus rules, allowing for substantive progress to be made even in the absence of universal agreement. While UNGA resolutions are not legally binding, such a negotiated set of standards can still be an important springboard for normative progress in the sphere of military applications of AI. It could potentially lead to the negotiation of a treaty, following a path similar to that of the Treaty on the Prohibition of Nuclear Weapons.

The ‘two-tiered’ approach that has gained significant support among states parties at the GGE is the most promising way forward towards legally binding international norms:

  1. Autonomous weapon systems that apply force without human control, supervision, and assessment should be prohibited.
  2. Systems integrating autonomous and AI technologies in targeting should be regulated via safeguards such as temporal and spatial restrictions, limits to situations of use, and transparency requirements.


Second, a top-down legal process towards securing the governance of AI technologies in the military domain should be accompanied by an international standard-setting process that aims to change, from the bottom up, the practices that actors perform when designing and using weapon systems integrating autonomous/AI technologies. This could take the form of a list of operational standards, developed by an international, interdisciplinary group of experts to sustain and strengthen human control in warfare, under the auspices of an international standards association such as the Institute of Electrical and Electronics Engineers Standards Association (IEEE SA) or the International Organization for Standardization (ISO). There is already work under way that could sustain such a direction, notably conducted by the IEEE SA Research Group on AI and Autonomy for Defense Systems, which the Principal Investigator of AutoNorms co-chairs. A standard created through such a bottom-up mechanism is necessary to bridge the gap on the way towards top-down regulation, and to complement that regulation once it is in place.

Featured image credit: Anna Nadibaidze
