About

Weaponised Artificial Intelligence, Norms, and Order

Weapon systems that integrate an increasing number of automated or autonomous features raise the question of whether humans will remain in direct, meaningful control of the use of force. Concerns relate, in particular, to weapon systems with autonomy in their critical functions – that is, selecting and engaging targets through sensors and algorithms without human input. Such autonomous features can take many different forms: we find them, inter alia, in loitering munitions, aerial combat vehicles, stationary sentries, counter-drone systems, air defence systems, surface vehicles, and ground vehicles. While diverse, these systems are captured by the catch-all category of autonomous weapons systems (AWS) because they weaponise Artificial Intelligence (AI).

Many states consider applying force without any human control unacceptable. But there is less consensus about the various complex forms of human-machine interaction along a spectrum of autonomy and the precise point(s) at which human control stops being meaningful. To illustrate, can human control that executes decisions based on indications it has received from a computer be considered meaningful, given that “blackboxed”, algorithmic processing is not accessible to human reasoning? Faced with these questions in the transnational debate at the UN Convention on Certain Conventional Weapons (CCW), states reach different conclusions: some states, supported by civil society organisations, advocate introducing new legal norms to prohibit so-called fully autonomous weapons, while other states leave the field open in order to increase their room for manoeuvre.

As discussions drag on with little substantial progress, the operational trend towards including automated and autonomous features in weapon systems continues. A majority of the top 10 arms exporters, including the USA, China, and Russia, are developing or planning to develop some form of AWS. Such dynamics point to a potentially influential trajectory: AWS may change what is considered appropriate when it comes to the use of autonomously applied force. Technologies have always shaped and altered warfare, and therefore how force is used and perceived. Yet the role that technology plays should not be conceived in deterministic terms. Rather, technology is ambivalent, making how it is used in international relations, and especially in warfare, a political question.

Research question, contribution and research objectives

The AutoNorms project (08/2020-07/2025) addresses these uncertainties by answering its main research question: to what extent will AWS shape and transform international norms governing the use of violent force? Answering this question is crucial because norms, defined broadly as understandings of appropriateness that evolve in practices, sustain and shape the international security order. While the rules-based order has been remarkably resilient, it currently finds itself increasingly subject to internal and external challenges. Monitoring changing practices and norms on the use of force will allow us to understand their repercussions for the fundamental character of international order.

Existing International Relations (IR) research on norms does not enable us to understand the dynamics of this vital process because it does not capture how norms emerge and develop procedurally. Despite its excellent and critical contributions, the state of the art has two limitations. First, conceptually, it connects norms predominantly to international law: constructivist approaches to norm research typically take legally institutionalised norms as their starting point and thereby presuppose a comparatively stable, determinate normative structure that can be contested. Second, it restricts attention chiefly to how norms emerge in deliberative international fora, considering norms predominantly as the outcome of open, public debate. But as attempts at legal regulation lag behind technological developments in weapon systems, the emergence of norms in practices is a typical empirical facet of this field. In other words, current approaches, while diverse, work from the top down: they take institutionalised international law as a starting point for examining how international norms change practices, rather than the reverse.

However, the empirical picture is much more dynamic than its current academic conceptualisation would have us think. The AutoNorms project will develop a new theoretical approach that allows us to study the bottom-up process of how norms manifest and develop in practices. Practices are patterned ways of doing things in different social contexts. This interdisciplinary concept combines sociology, constructivist IR, and critical legal scholarship, accentuating the constitutive quality of practices as sites of norm emergence and change. A focus on practices allows the AutoNorms project to study the micro-level of norm emergence in the field of AWS from the bottom up, going beyond formal norm-codification activities at the macro level.

The AutoNorms project pursues three research objectives:  

  1. To analyse how and under what conditions norms emerge and change in practices. 
  2. To analyse how understandings of appropriateness about autonomising the critical functions of weapon systems emerge and evolve across military, transnational political, dual-use, and popular imagination contexts in four countries (China, Japan, Russia, USA). 
  3. To investigate how emerging norms on AWS will affect the make-up of the current international security order. 

Case studies and methods

In choosing which kinds of practices to study, the AutoNorms project works via case studies of prominently positioned states in the international security field that represent varied positions: China, Japan, Russia, and the USA. Their practices are likely to be particularly significant in shaping emerging norms. The USA has been the most open and vocal about its pursuit of systems with increasingly autonomous features. While reports indicate that Russian arms manufacturers are developing systems with autonomous features in their critical functions, Russian diplomats still question the very existence of AWS. China is the only developer of AWS to support a legal ban on fully autonomous systems, although its definition of what characterises such systems includes unconventional elements. Japan has a world-leading robotics industry and has become a growing voice in the debate, favouring a “wait and see” approach.

To gain access to practices, the AutoNorms project combines five comparative, qualitative methods: (1) building a qualitative, technological database of weapon systems with automated and autonomous features; (2) narrative interviewing; (3) participant observation; (4) visual analysis; and (5) public attitude surveys.