Research Article

Topics: Human-Machine Interaction, Political Process

Reflecting on the Future Norms of Warfare

[The following essay builds on a contribution submitted by Ingvild Bode to the RUSI/HRI project “The Future Rules of Warfare”. The essay reflects on how current norms of conflict and warfare might be changing.]

The legal norms enshrined in the UN Charter, as well as associated legal frameworks such as the Geneva Conventions, the International Bill of Human Rights, and weapon-specific disarmament treaties, have created a relatively stable set of rules for state behaviour in conflict and warfare. States share the belief that they are bound by rules that are voluntarily agreed upon and that provide a certain expectation of behaviour.

At the same time, these legal norms have always been unstable in two ways. First, as the outcomes of diplomatic compromise, they contain elements of ambiguity. International law provides a baseline but indeterminate structure for action. This leaves room for interpretations that “depend on what one regards as politically right, or just”. Second, legal norms do not only restrain state behaviour; they also make certain types of behaviour permissible. The language of law is therefore also a currency that states use to legitimise their actions. For example, the focus on individual and collective self-defence as the only legitimate unilateral justification for military intervention – a position informed by the UN Charter’s general prohibition on the use of force – de facto “encourages states to go to war under the banner of self-defence”. The language of law can also be read as conferring permissibility: what is legal can come to be seen as acceptable or allowable even where the ethics of such actions remain uncertain. It is important to recognise these politics of international law, as they can be utilised by all states.

As the dynamics of state responses undertaken vis-à-vis non-state actors in the last 20 years underline, these politics complicate studying potentially disruptive changes in international law. How to accommodate the use of force against violent non-state actors in an international system structured around sovereign, territorial states has long been a problem. State practices in the aftermath of 9/11 have weakened previously comparatively solid standards of attribution and imminence and broadened the practice of targeted killing. The availability of technology, notably armed drones, has played a prominent role in the expansion of these practices.

Yet, it is unclear whether these practices may lead to long-term changes in international law. First, state practices are far from uniform and do not conform to the recognised standards of customary international law. Customary international law only accounts for state practices that point to “a general practice accepted as law”, manifested in a consistently stated belief in the applicability of a particular rule. But this is not what we are seeing. Only some states – among them the US, the UK, France, Turkey, and Saudi Arabia – have explicitly argued in favour of expanded understandings of attribution, imminence, or the practice of targeted killing. What is more, these states have used varied justifications for their practices. The “unwilling or unable” formula illustrates this. It is employed to justify the use of force against violent non-state actors on the territory of a “host” state that, whilst not necessarily having any links to the non-state group, is perceived to be “unwilling or unable” to remove them. But only the US, Turkey, and the UK (to an extent) have used this argument.

Second, most states have remained silent when it comes to addressing the evolving legal norms of conflict and warfare discussed above. What does their silence mean? Can it be read as tacit acquiescence, and does it have any legal significance? It is difficult to answer these questions. Many legal scholars do not read silence as support. However, they also point to how silence can contribute to a new range of norms being “accepted” by default. This is problematic because such new legal norms would not be international in scope.

Despite such uncertainties, what is clear is that state practices contribute significantly to muddying the legal-normative waters, thereby increasing the scope of contested areas in international law. This risks lowering the general thresholds towards using force as justifications for military intervention can be more ‘easily’ found within elastic, contested areas of international law.

How norms change

When thinking about norms and how they change, it is insufficient to consider only legal norms. This is a major research gap when it comes to mapping norm change in conflict and warfare. While the legal norms of the law of armed conflict, human rights law, and humanitarian law apply in general to all situations of conflict, the history of warfare shows how the introduction of new weapon technologies can expose significant gaps in the legal framework. States have often used weapon technologies long before agreeing on specific, novel legal norms regulating their use. In the meantime, states still develop ways and patterns of using such weapon technologies, including understandings of what the “appropriate” uses of these technologies are. In other words, how states use novel weapons in practice can lead to the emergence of norms. These are not legal norms but social norms, centred around understandings of appropriateness. They are often implicit social understandings that are not explicitly codified and agreed upon by states in writing.

The debate around lethal autonomous weapon systems (LAWS) that “select and apply force to targets without human intervention” provides a highly relevant illustration of such dynamics. The debate has turned to outlining a requirement for human control and judgment. This may eventually become a new legal norm if states are able to negotiate new international law to govern LAWS.

But the concern about an impending loss of human control is framed as a future concern. This is puzzling given that there is a trajectory of weapon systems including automated and autonomous features starting in the 1970s. These include air defence systems, guided missiles, active protection systems, counter-drone systems, and loitering munitions. State practices of designing and using such systems have already incrementally shaped a norm of what counts as an “appropriate” quality of human control and judgement. This norm sets a diminished quality of human control as being suitable for war and conflict.

Such processes have been implicit and remain to be explicitly addressed. They point to two crucial dynamics in the context of examining norms in conflict and warfare: first, norms of conflict and warfare are not only set or changed in public debate between states. Instead, they can also emerge in state practices that are performed over long periods of time and may therefore not be subject to closer, open scrutiny. Second, we may have two simultaneous processes of norm emergence at hand when it comes to new weapon systems: a public, legal process and a silent, more implicit, social process.

It may take decades for new legal norms to take shape in the public, legal process: AWS have been discussed by the international community since 2014 and the process remains at the discussion stage. While progress seems glacial, it is still swift in comparison to past processes that have eventually led to the establishment of new legal norms, such as the banning of anti-personnel landmines. In the meantime, social norms are being set in how states design and use weapon systems with autonomous features.

We may therefore continue to have two parallel processes shaping the norms of war and conflict that do not necessarily overlap. Further, even if states agree on setting a legal norm defining a necessary quality of human control, that legal norm is likely to be ambiguous in character, providing states with a significant amount of leeway. States may therefore continue to engage in use of force practices with AWS in much the same way as before the legal norm was in place. Norms that have emerged as part of practices of designing and using AWS therefore run the risk of undercutting deliberative legal efforts. To counter these dynamics, it is vital that such silent norm-making processes are closely examined and publicly articulated.

Autonomous weapon systems and an emerging norm of human control

The debate about autonomous weapons systems poses fundamental questions about the extent to which the use of force in conflict and warfare, as well as the very application of international law, remains in human hands. At first glance, practically all states parties addressing the Group of Governmental Experts (GGE) on emerging technologies in the area of LAWS highlight the importance of maintaining human control over the use of (lethal) force. The most substantial outcome of the GGE to date, the Guiding Principles on LAWS, includes a principle on human-machine interaction. We can therefore observe the potential making of a new legal norm, if states proceed towards a negotiation stage. This option has arguably gathered steam after the ICRC’s clear positioning in favour of new international law around LAWS in May 2021.

But any consensus on what quality of human control is appropriate is yet to emerge. Many states favour a long-term view on human control as something that should be present throughout the entire life cycle of a weapon system from design to operation. Further, the US and Australia have argued that autonomous features can enhance human control in specific use of force situations by “effectuating the intent of commanders”. This perspective treats AI as a straightforward extension of human agency. Such thinking downplays the complexity of human-machine interaction and how this challenges the decision-making capacity of humans operating (or working in teams) with AI-driven weapon systems. Indeed, we must consider the extent to which the technology itself, having been designed and conceptualised in a certain way, can itself become a change agent for shaping (new) legal norms.

Further, in terms of other factors that may continue to shape or disrupt current norms of conflict and warfare, we should also consider the arguments that states offer. State discourse often points to the inevitability of integrating ever more AI into weapon systems. The reasoning is often two-fold: first, states map out weaponised AI as part of a steady, seemingly irresistible process. Second, we have also seen an emphasis on great power competition resurfacing. From a US (and a UK) perspective, it is necessary to include ever more AI in weapon systems because China (or, to a lesser extent, Russia) is doing so. China and Russia, in turn, invoke the same necessity as a reaction to the moves of the US (and allied states). What gets lost in these dynamics is that both options represent particular, not exclusive, courses of action that policymakers can choose to pursue or refrain from.

In sum, tracking how norms of conflict and warfare change, especially considering emerging technologies, requires moving beyond a narrow understanding of international law toward a critical one. I argue that we can expect to see norms, when defined broadly as understandings of appropriateness, emerging in practices of designing, training for, and operating (novel) weapon systems. How these norms relate to standards enshrined in international law is an active research question.
