
Has REAIM “Re-aimed” AI Applications in the Military Domain?

In February 2023, a positive step towards regulating the development and growing use of artificial intelligence (AI) in warfare was taken at a two-day conference in The Hague: the Global Summit on Responsible Artificial Intelligence in the Military Domain (REAIM). An initiative of the Dutch government in partnership with the Republic of Korea, REAIM brought together more than 2,000 participants from roughly 80 countries, representing governments, academic institutions, militaries, industry and civil society.

At a time when international negotiations at the Group of Governmental Experts (GGE) on lethal autonomous weapon systems (LAWS) under the United Nations (UN) framework remain at an impasse, REAIM provided a timely new venue for both state and non-state stakeholders to advance discussions on regulating military uses of AI.

For many experts and observers, AI technologies and their increasing integration into weapons systems, especially into targeting functions, could transform how wars are waged in an unprecedented and potentially dangerous manner. By bringing together key stakeholders from multiple sectors, the summit offered an opportunity to cross-fertilise ideas and knowledge in an inclusive and open fashion.

As a tangible outcome of this multistakeholder exchange, the summit put forward the REAIM 2023 Call to Action, urging immediate efforts to craft norms of responsible behaviour concerning the development, deployment and use of AI in the military domain. The Call to Action has been endorsed by more than 60 countries, including China and the US, the two most important contenders in the field of military AI.

Notwithstanding REAIM’s achievements in adding momentum to conversations in a vital area, a closer look at the Call to Action and at some of the preceding sessions’ discussions still leaves much room for improvement in terms of depth, clarity and a more inclusive representation of diverse voices. The following points of critique, paired with suggestions where possible, are offered in an effort to help this unprecedented initiative reach its full potential.

 

A critical review of the REAIM 2023 Call to Action

While useful in establishing the “urgent nature” of developments in AI and autonomous weapons, as introduced by Dutch Foreign Minister Wopke Hoekstra, the Call to Action does not go substantially beyond the 11 Guiding Principles on LAWS adopted by the 2019 Meeting of the High Contracting Parties to the UN Convention on Certain Conventional Weapons (CCW), aside from placing more emphasis on multistakeholder participation. Like the 2019 Guiding Principles, the 25-point Call to Action consists of non-legally binding guidelines that outline common concerns and understandings regarding military applications of AI, along with possible actions to promote their responsible use and development. It calls on states to use AI “in full accordance with international legal obligations and in a way that does not undermine international security, stability and accountability”. States, accordingly, should “ensur[e] appropriate safeguards and human oversight” over the use of autonomous systems.

Likely to the surprise of many observers, the document discusses the military use of AI in a highly positive tone, albeit while acknowledging associated challenges and risks. For instance, it contains numerous mentions of the “opportunities” and “benefits” that AI systems can bring to the military domain, including the following excerpts: “We recognise the potential of AI applications in the military domain for a wide variety of purposes, at the service of humanity, including AI applications to reduce the risk of harm to civilians and civilian objects in armed conflicts” (Point 2) and “we recognise that failure to adopt AI in a timely manner may result in a military disadvantage” (Point 8). Coupled with phrases that cast human involvement in warfare as a shortcoming, such as “bearing in mind human limitations due to constraints in time and capacities” (Point 12), the declaration appears to favour the increasing integration of autonomous or AI technologies into weapons systems, as long as this is achieved in a manner not seen as threatening to “international security, stability and accountability”.

The standard of appropriate AI use in the military domain is further softened by the recognition that implementation procedures may vary “per state” (Point 15). Such language likely reflects the lowest common denominator of states’ preferences, a common plight in multilateral policy-making. But what are the potential effects of an international document that calls for responsible AI regulation while placing disproportionate emphasis on the benefits of autonomous and AI technologies in the military domain and leaving states wide room for manoeuvre in interpreting appropriate regulatory approaches? Arguably, this kind of phrasing severely undercuts REAIM’s message about coming together and acting now to address “the possible distrust of systems, the issue of human involvement, the lack of clarity regarding responsibility, and the potential unintended consequences”, the common challenges raised by AI systems in the military domain that are outlined in the declaration’s preamble.

The REAIM Call to Action also makes no mention of the possibility of developing legally binding rules in this area. This is surprising, given that the Dutch government recently changed its position to supporting a legally binding instrument prohibiting fully autonomous weapons and regulating other types of weapons with autonomy. One can speculate whether this focus on non-binding guidelines, together with the sanguine language used to describe military AI, deterred Brazil and other states that have urgently called for negotiating new legally binding rules on military AI from endorsing the document.

Meanwhile, REAIM’s lowest-common-denominator approach did not satisfy Israel either, which declined to sign the statement for fear of any limitations being set on its use of AI in the military domain. Despite its scale, flexibility and good intentions, the REAIM summit has not been able to adequately address the differences between the global North and South when it comes to regulating LAWS, a major cause of the stalemate in the UN GGE process.

The inclusion of many undefined terms further clouds the responsibilities entailed by endorsing the document. “Accountability”, “international legal obligations”, “sufficient research, testing and assurance”, “appropriate safeguards and human oversight”, and “responsible AI” are some examples of the vague terms deployed. Of course, such vague language is often a characteristic of international political declarations of this kind. But to make more substantial progress on regulating this field, much more detail and substance would be needed.

Finally, important questions remain unanswered in the document, many of which are perennial problems in the international debate about military AI: what should count as an “appropriate” quality of human control to ensure that humans remain responsible in the AI decision-making process? How can such technologies be developed and deployed responsibly? What counts as sufficient research, testing and assurance to ensure that the systems work reliably and predictably and remain under human control? How can multiple stakeholders work together to enforce the suggested principles? And what measures would increase transparency and communication and reduce the risks of inadvertent conflict and escalation?

 

A display of ideas rather than a catalyst for problem-solving

From my observations, discussions at the REAIM conference often remained at the surface level, trading in platitudes that could not generate meaningful debate about approaches to the responsible use of military AI. This may be due to the limited time available to each of the vast number of participants and the sensitive nature of the topic. However, what the international community needs to move forward on this important topic is not simply a collection of ideas but more in-depth discussion of solutions that can bring about meaningful change.

Participants from different disciplinary and professional backgrounds remained divided on key issues related to military AI. For example, one session I attended featured a simulation game run by Gillian van de Boer-Visschedijk of TNO, the Dutch applied scientific research organisation. The military representatives, lawyers, entrepreneurs, government officials and academics in the room gave widely different answers to questions on issues such as efficiency, privacy and human control in a conflict scenario where autonomous weapons could be deployed.

What struck me most was the military representatives’ consistent prioritisation of efficiency, security and rules of engagement over other ethical considerations, in sharp contrast to what most academics in the field consider appropriate. However, the well-designed game and the accompanying interactive activities served only to reveal the stakeholders’ diverse views. What was regrettably missing was a fiery debate about contending views of right and wrong, which might have sparked reflection and new ideas on what to prioritise and what to consider when integrating AI technologies into the military domain.

 

We are in this together

REAIM’s efforts to start a global dialogue on responsible AI in the military domain have also not been sufficiently open and inclusive. Discussions remained largely Western-centric, deprived of contributions from the Global South. Notably, more than 30 Latin American and Caribbean states gathered for a similar state-level conference in Costa Rica to discuss the social and humanitarian impact of autonomous weapons in the week immediately following REAIM. Many states in this group have repeatedly called for the immediate negotiation of new international law stipulating binding rules on autonomous weapons.

There is clearly a need for the active inclusion of Global South actors, so that the position of the likely “victims” of AI weaponisation, as the primary testing grounds and battlefields for AI-based weapons, can be heard and represented. And while Russia was understandably not invited to attend the summit following its full-scale invasion of Ukraine, its practices with respect to weaponising AI should still be analysed and properly understood based on the information available.

Meanwhile, some participants framed the call to deepen technological, political and security engagement as extending only to democratic states. This was particularly obvious in statements delivered by government representatives, which highlighted a “contrast” between national approaches rather than a search for common ground. Rhetoric assuming that democratic countries develop and use AI weapons “ethically”, in contrast to the practices of “other” states, abounded at the conference. Such views reinforce a reification of the “us” versus “them/the other” binary, which sits uneasily with the commitment to dialogue, collaboration and joint action needed in this space.

When it comes to the use of AI technologies in the military domain, these presumed boundaries between “democratic” and “other” states often do not exist. Insights from my recent fieldwork in China (October 2022 to January 2023), observations at the UN GGE, interactions with the Chinese delegation at the REAIM summit, and analysis of policy documents all show that Chinese stakeholders, for example, confront similar dilemmas. These include the struggle to keep military AI technologies under control and data secure, while not undermining efforts to safeguard domestic interests and national security, including those associated with integrating AI technologies (see also Qiao-Franco and Bode, 2023; Qiao-Franco and Zhu, 2022). The quest for safe, responsible and controllable AI use appears to be a shared concern among states, regardless of whether they hold the same political and ideological values.

 

Re-aim REAIM?

For REAIM, or any multilateral dialogue intended to provide new directions for regulating AI applications in the military domain, to realise its potential, it is important to deepen the debate at a substantive level, so that contemporary and future issues are contextualised and understood in terms of past and ongoing practices along the trajectory of increasing autonomy in weapons systems.

In another paper, Dr Ingvild Bode and I stressed the importance of “systematically examining domestically performed practices” spanning the economy, defence, technology and diplomacy. These practices involve various sets of actors who sustain ideas about human-machine interaction, tilting either towards allowing for more human control or towards more machine autonomy. States should “encourage practices that promote the responsible use of AI and expose, discredit, and end practices that may lead to abuses of AI”. This imperative involves curbing dangerous civilian AI uses that have spillover effects into the military realm and tracking arms competitions among states that may cause the premature deployment of insufficiently tested AI weapons.

To improve international dialogues, conversations should go deeper and engage the looming complexities surrounding the integration of autonomous and AI technologies into weapons systems. Such conversations must be direct, inclusive, tangible and transparent, and conducted with a sense of urgency. I hope that the next planned REAIM summit in South Korea will “re-aim” dialogue in these much-needed ways.

Featured image credit: Zheng Lefeng, Centre for International Security and Strategy, Tsinghua University 
