Publications

Research articles and other publications by the AutoNorms team

Technology in the Quest for Status: The Russian Leadership’s AI Narrative

Anna Nadibaidze •  16 March 2024

In an article published in the Journal of International Relations and Development, Anna Nadibaidze examines the mismatch between the Russian leadership’s AI narrative and the country’s technological capabilities through the lens of Russia’s quest for great power status and ontological security. She shows the need to scrutinise narratives surrounding technology, especially AI technologies and their associated ambiguities, as part of how states deal with constant uncertainty about the recognition of their self-perceived identity. Based on an analysis of textual and visual documents collected via open-access sources, the article finds that the Russian official AI narrative embeds three of the elements forming Russia’s conception of a great power, namely the ability to compete, modernise, and attain technological sovereignty. Although the official rhetoric does not match the reality of Russian capabilities, the narrative serves as a cognitive tool in the quest for identity during times of uncertainty.

ICRC Humanitarian Law & Policy Blog

Ingvild Bode •  14 March 2024

In a short piece for the ICRC Humanitarian Law & Policy blog, Ingvild Bode argues that bias is as much a social as a technical problem and that addressing it therefore requires going beyond technical solutions. She holds that the risks of algorithmic bias need to receive more dedicated attention as the work of the Group of Governmental Experts (GGE) on LAWS turns towards operationalisation. These arguments are based on Ingvild’s presentation at the GGE side event “Fixing Gender Glitches in Military AI: Mitigating Unintended Biases and Tackling Risks”, organised by the United Nations Institute for Disarmament Research (UNIDIR) on 6 March 2024.

Article in Security Dialogue

Guangyu Qiao-Franco •  31 January 2024

Security Dialogue published the article “Insurmountable Enemies or Easy Targets? Military-themed Videogame ‘Translations’ of Weaponized Artificial Intelligence” by Guangyu Qiao-Franco, co-authored with Paolo Franco (Radboud University, the Netherlands). 

International relations scholarship has long emphasized that popular culture can shape public understandings and political realities. This article explores this potential in the context of military-themed videogames and their portrayals of weaponized artificial intelligence (AI). Within paradoxical videogame representations of AI weapons, both as ‘insurmountable enemies’ that pose existential threats to humankind in narratives and as ‘easy targets’ that human protagonists routinely overcome in gameplay, the authors identify distortions of human–machine interaction that contradict real-world scenarios. Leveraging the Actor-Network Theory concept of ‘translation’, the authors explain how these distorted portrayals of AI weapons are produced by entanglements between heterogeneous human and non-human actors that aim to make videogames mass-marketable and profitable.

Special issue on communities of practice in Global Studies Quarterly

Ingvild Bode and Guangyu Qiao-Franco •  27 January 2024

Ingvild Bode and Guangyu Qiao-Franco each published an article as part of the special issue “International Communities of Practices and Social Ordering” in Global Studies Quarterly 4(1). The special issue was edited by Emanuel Adler, Niklas Bremberg, and Maïka Sondarjee.

In her article “Emergent Normativity: Communities of Practice, Technology, and Lethal Autonomous Weapon Systems”, Ingvild draws on practice theories, science and technology studies, and critical norm research. She argues that a constellation of communities of practice shapes the public debate about LAWS.

Meanwhile, Guangyu’s article “An Emergent Community of Cyber Sovereignty: The Reproduction of Boundaries?” probes the boundary-work of communities of practice by examining China’s active efforts to advance a state-centric approach to managing cyberspace in the international arena.

Introduction to Special Issue on Algorithmic Warfare 

AutoNorms •  8 January 2024

In a new article published in Global Society, Ingvild Bode, Hendrik Huelss, Anna Nadibaidze, Guangyu Qiao-Franco, and Tom Watts take stock of the ongoing debates on algorithmic warfare in the social sciences.

The article “Algorithmic Warfare: Taking Stock of a Research Programme” seeks to equip scholars in International Relations and beyond with a critical review of both the empirical context of algorithmic warfare and the different theoretical approaches to studying practices related to the integration of algorithms into international armed conflict. The review focuses on discussions about (1) the implications of algorithmic warfare for strategic stability, (2) the morality and ethics of algorithmic warfare, (3) how algorithmic warfare relates to the laws and norms of war, and (4) popular imaginaries of algorithmic warfare.

This article serves as the introduction to a Special Issue on the Algorithmic Turn in Security and Warfare, published in Global Society 38(1) and edited by Ingvild Bode and Guangyu Qiao-Franco.

Written Submission to the UN Office of the Secretary General’s Envoy on Technology

Ingvild Bode, Hendrik Huelss, Anna Nadibaidze & Tom Watts •  28 September 2023

The AutoNorms team has submitted a written contribution to the United Nations Office of the Secretary General’s Envoy on Technology. In preparation for the first meeting of the Multi-stakeholder High-level Advisory Body on Artificial Intelligence, the Office issued a call for papers on global AI governance. AutoNorms’ submission touches upon the issue of global governance of AI technologies in the military domain. Read it in full here.

Article published in Cooperation and Conflict

Tom Watts & Ingvild Bode •  23 September 2023

Cooperation and Conflict has published Tom Watts and Ingvild Bode’s article “Machine guardians: The Terminator, AI narratives and US regulatory discourse on lethal autonomous weapons systems”. References to the Terminator films are central to Western imaginaries of Lethal Autonomous Weapons Systems (LAWS). Whether references to the Terminator franchise have featured in the United States’ international regulatory discourse on these technologies nevertheless remains underexplored. Bringing the growing study of AI narratives into greater dialogue with the International Relations literature on popular culture and world politics, this article unpacks the repository of different stories told about intelligent machines in the first two Terminator films. Through an interpretivist analysis of this material, Watts and Bode examine whether these AI narratives have featured in the US written contributions to the international regulatory debates on LAWS at the United Nations Convention on Certain Conventional Weapons between 2014 and 2022. Their analysis highlights how hopeful stories about ‘machine guardians’ have been mirrored in these statements: LAWS development has been presented as a means of protecting humans from physical harm, enacting the commands of human decision makers, and using force with superhuman levels of accuracy. This suggests that, contrary to existing interpretations, the various stories told about intelligent machines in the Terminator franchise can be mobilised both to support and to oppose the possible regulation of these technologies.

Blog contribution

Ingvild Bode & Tom Watts •  29 June 2023

In a contribution to the ICRC Humanitarian Law & Policy Blog, Ingvild Bode and Tom Watts highlight the need for legally binding rules on AWS based on their research about the development and use of loitering munitions. They write, “we do not need to go to dystopian sci-fi narratives to imagine potential problems associated with AWS. There are already problems at hand in how states design and use weapon systems integrating autonomous technologies in targeting in particular ways”.

Loitering Munitions and Unpredictability

Ingvild Bode & Tom Watts •  7 June 2023

A new report published by the Center for War Studies, University of Southern Denmark and the Royal Holloway Centre for International Security highlights the immediate need to regulate autonomous weapon systems, or ‘killer robots’ as they are colloquially called.

Written by Dr. Ingvild Bode and Dr. Tom F.A. Watts, the “Loitering Munitions and Unpredictability” report examines whether the use of automated, autonomous, and artificial intelligence (AI) technologies in the global development, testing, and fielding of loitering munitions since the 1980s has impacted emerging practices and social norms of human control over the use of force. It is commonly assumed that the challenges generated by the weaponization of autonomy will only materialize in the near- to medium-term future; the report questions this assumption.

The report’s central argument is that whilst most existing loitering munitions are operated by a human who authorizes strikes against system-designated targets, the integration of automated and autonomous technologies into these weapons has created worrying precedents deserving of greater public scrutiny.

Read the full report here.

Article in Heidelberg Journal of International Law

Ingvild Bode •  May 2023

Ingvild Bode’s article “Contesting Use of Force Norms through Technological Practices” has been published in the Heidelberg Journal of International Law (HJIL) as part of a symposium on the contestation of the laws of war. This article examines the practice of targeted killing in the context of jus contra bellum and the emerging norm of ‘meaningful’ human control in jus in bello. It combines norm research with scholarship across critical international law, practice theories, and science and technology studies to examine the emergence of contested areas in between the international normative and legal orders. Read the article here.

How can the EU regulate military AI?

Ingvild Bode & Hendrik Huelss •  29 May 2023

Writing in The Academic, Ingvild Bode and Hendrik Huelss analyse the EU’s ambivalent stance as a hesitant regulator of military AI. They argue that the EU’s position results in two significant consequences, both of which favour a specific type of technical, corporate expertise. Firstly, the EU’s modest attempts at establishing rules on military AI attract technical and corporate experts to contribute their proficiency as part of advisory panels. Secondly, the EU finds itself becoming a rule-taker, as its member states utilise military applications of AI that embody design choices made by these technical, corporate experts.

Written evidence submitted to the House of Lords Select Committee on AI in Weapon Systems

The AutoNorms team •  4 May 2023

The AutoNorms team has submitted written evidence to the UK House of Lords AI in Weapon Systems Select Committee as part of its enquiry on AI in weapon systems.

Read the evidence submitted by Ingvild Bode, Hendrik Huelss, and Anna Nadibaidze here.

Read the evidence submitted by Tom Watts here.

The Impact of AI on Strategic Stability is What States Make of It: Comparing US and Russian Discourses

Anna Nadibaidze • 26 April 2023

In their article published in the Journal for Peace and Nuclear Disarmament, Anna Nadibaidze and Nicolò Miotto argue that the relationship between AI and strategic stability is not simply given by the technical nature of AI, but also constructed by policymakers’ beliefs about these technologies and about other states’ intentions to use them. Adopting a constructivist perspective, they investigate how decision-makers in the United States and Russia talk about military AI by analyzing US and Russian official discourses from 2014–2023 and 2017–2023, respectively.

Nadibaidze and Miotto conclude that both sides have constructed a threat out of their perceived competitors’ AI capabilities, reflecting their broader perspectives on strategic stability as well as a social context characterized by distrust and feelings of competition. Their discourses fuel a cycle of misperceptions that could be addressed via confidence-building measures. However, this cycle is unlikely to be broken amid ongoing tensions following the Russian invasion of Ukraine. The article was published as part of a Special Issue on Strategic Stability in the 21st Century, edited by Ulrich Kühn.

Article published in European Journal of International Relations

Ingvild Bode • 10 April 2023

In the article “Practice-based and public-deliberative normativity: retaining human control over the use of force”, published in the European Journal of International Relations, Ingvild Bode theorises how practices of designing, of training personnel for, and of operating weapon systems integrating autonomous technologies have shaped normativity/normality on human control at sites unseen. She traces how this normativity/normality interacts with public deliberations at the Group of Governmental Experts (GGE) on LAWS by theorising potential dynamics of interaction. Bode argues that the normativity/normality emerging from practices performed in relation to weapon systems integrating autonomous technologies assigns humans a reduced role in specific use of force decisions and presents this diminished decision-making capacity as ‘appropriate’ and ‘normal’.

Analysis of Russia’s narratives on military AI and autonomy

Anna Nadibaidze • 3 March 2023

In an article for the Network for Strategic Analysis (NSA), Anna Nadibaidze analyses how Russia’s ‘low-tech’ war on Ukraine discredited its military modernization narrative, of which drones and AI have been a key element. She argues, “Russia’s full-scale invasion of Ukraine revealed the mismatch between the narrative Moscow has been promoting and the reality of Russian military technological capabilities”.

The article is also available in French on the website of Le Rubicon.

Article in Journal of European Public Policy

Ingvild Bode & Hendrik Huelss • 14 February 2023

The Journal of European Public Policy has published “Constructing expertise: the front- and back-door regulation of AI’s military applications in the European Union” by Ingvild Bode and Hendrik Huelss. This article is part of a Special Issue on the Regulatory Security State in Europe, co-edited by Andreas Kruck and Moritz Weiss.

The article investigates how the EU as a multi-level system aims to regulate military artificial intelligence (AI) based on epistemic authority. It suggests that the EU acts as both a rule-maker and a rule-taker of military AI, predicated on constructing private, corporate actors as experts. As a rule-maker, the EU has set up expert panels such as the Global Tech Panel to inform its initiatives, thereby inviting corporate actors to become part of its decision-making process through the front door. But the EU is also a rule-taker in that its approach to regulating military AI is shaped through the back door by how corporate actors design AI technologies. These observations signal an emerging hybrid regulatory security state based on ‘liquid’ forms of epistemic authority, which empowers corporate actors but also denotes a complex mix of formal political and informal expert authority.

The need for and nature of a normative, cultural psychology of weaponized AI

Ingvild Bode • 6 February 2023

Ingvild Bode co-authored the article “The need for and nature of a normative, cultural psychology of weaponized AI (artificial intelligence)” with Rockwell Clancy and Qin Zhu from the Department of Engineering Education, Virginia Polytechnic Institute and State University. The article was published in Ethics and Information Technology as part of the collection on Responsible AI in Military Applications.

This position piece describes the motivations for and sketches the nature of a normative, cultural psychology of weaponized AI. The motivations for this project include the increasingly global, cross-cultural, and international nature of technologies, and the counter-intuitive nature of normative thoughts and behaviors. The project consists in developing standardized measures of AI ethical reasoning and intuitions, coupled with questions exploring the development of norms, administered and validated across different cultural groups and disciplinary contexts. The goal of this piece is not to provide a comprehensive framework for understanding the cultural facets and psychological dimensions of weaponized AI but, rather, to outline in broad terms the contours of an emerging research agenda.

Article in Ethics and Information Technology

Ingvild Bode, Hendrik Huelss, Anna Nadibaidze, Guangyu Qiao-Franco & Tom Watts • 3 February 2023

The AutoNorms team’s article “Prospects for the global governance of autonomous weapons: comparing Chinese, Russian, and US practices” argues for the necessity of adopting legal norms on the use and development of autonomous weapon systems (AWS). Without a framework for global regulation, state practices in using weapon systems with AI-based and autonomous features will continue to shape the norms of warfare and affect the level and quality of human control in the use of force. By examining the practices of China, Russia, and the United States in their pursuit of AWS-related technologies and their participation in the UN CCW debate, the team acknowledges that these differing approaches make it challenging for states parties to reach an agreement on regulation, especially in a forum based on consensus. Nevertheless, the article argues that global governance on AWS is not impossible. It will depend on the extent to which an actor or group of actors is ready to take the lead on an alternative process outside of the CCW, inspired by the direction of travel given by previous arms control and weapons ban initiatives.

The article has been published in Ethics and Information Technology as part of the collection on Responsible AI in Military Applications.

Article in The Chinese Journal of International Politics 

Guangyu Qiao-Franco & Ingvild Bode • 9 January 2023

In the article “Weaponised Artificial Intelligence and Chinese Practices of Human–Machine Interaction”, published in the Chinese Journal of International Politics, Guangyu Qiao-Franco and Ingvild Bode unpack China’s understanding of human–machine interaction. Despite repeatedly supporting a legal ban on lethal autonomous weapons systems (LAWS), China simultaneously promotes a narrow understanding of these systems that intends to exclude them from what it deems “beneficial” uses of AI. The article makes sense of this ambivalent position by investigating how it is constituted through Chinese actors’ competing practices in the areas of economy, science and technology, defence, and diplomacy. Such practices produce normative understandings of human control and machine autonomy that pull China’s position on LAWS in different directions. Qiao-Franco and Bode contribute to scholarship at the intersection of norm research and international practice theories by examining how normativity originates in and emerges from diverse domestic contexts within competing practices. They also aim to provide insights into possible approaches to achieving consensus in debates on regulating LAWS, which at the time of writing had reached a stalemate.

Article published in Journal of Contemporary China

Guangyu Qiao-Franco • 1 December 2022

The article “China’s Artificial Intelligence Ethics: Policy Development in an Emergent Community of Practice”, by Guangyu Qiao-Franco and Rongsheng Zhu from Tsinghua University, has been published in the Journal of Contemporary China. Extant literature has not fully accounted for the changes underway in China’s perspectives on the ethical risks of artificial intelligence (AI). This article develops a community-of-practice (CoP) approach to the study of Chinese policymaking in the field of AI. It shows that the Chinese approach to ethical AI emerges from the communicated practices of a relatively stable group of actors from three domains: the government, academia, and the private sector. This Chinese CoP is actively cultivated and led by government actors. The paper draws attention to how CoP configurations during collective situated learning and problem-solving among its members inform the evolution of Chinese ethical concerns about AI. In so doing, it demonstrates how a practice-oriented approach can contribute to interpreting Chinese politics on AI governance.

Publication of analysis piece

Anna Nadibaidze • 8 September 2022

Writing for the Foreign Policy Research Institute (FPRI) blog, Anna Nadibaidze analyses the Russian leadership’s narrative on technological sovereignty. She argues, “The fact that Russia’s leadership is pushing this narrative suggests that the goal is, instead, to provide a sense of ontological security and intensify the belief in Russia’s identity as a great power”. Read the full piece here.

Publication in La Vanguardia

Anna Nadibaidze • 9 June 2022

Anna Nadibaidze contributed to Dossier, a quarterly publication by the Barcelona-based newspaper La Vanguardia. Her text “Weaponized Artificial Intelligence in the Nuclear Domain” (translated into Spanish) appeared in Dossier #84, entitled “Nuclear Rearmament”.

Article published in Contemporary Security Policy

Anna Nadibaidze • 19 May 2022

Anna Nadibaidze’s article “Great power identity in Russia’s position on autonomous weapons systems”, published in Contemporary Security Policy, proposes an identity-based analysis of the Russian position in the global debate on AWS. Based on an interpretation of Russian written and verbal statements submitted to the United Nations Convention on Certain Conventional Weapons (CCW) meetings from 2014 to 2022, Nadibaidze finds that two integral elements of Russian great power identity—the promotion of multipolarity and the recognition of Russia’s equal participation in global affairs—guide its evolving position on the potential regulation of AWS. The analysis makes an empirical contribution by examining one of the most active participants in the CCW discussion, an opponent of any new regulations of so-called “killer robots,” and a developer of autonomy in weapons systems. It highlights the value of a more thorough understanding of the ideas guiding the Russian position, assisting actors who seek a ban on AWS in crafting their responses and strategies in the debate.

Online publication in Le Rubicon

Anna Nadibaidze • 3 May 2022

In an online piece (in French) published in Le Rubicon, Anna Nadibaidze explores the different pathways available for the regulation of autonomous weapons. She notes the importance of moving forward in the AWS discussion, whether at the UN or as part of an independent process.

Publication of analytical piece in German

Ingvild Bode & Anna Nadibaidze • April 2022

Ingvild Bode and Anna Nadibaidze contributed to the magazine c’t Magazin für Computertechnik with the article “Von wegen intelligent: Autonome Drohnen und KI-Waffen im Ukraine-Krieg” (Not So Intelligent: Autonomous Drones and AI Weapons in the Ukraine War).

Read the article in German here.

Book publication

Ingvild Bode & Hendrik Huelss • January 2022

Autonomous Weapons Systems and International Norms, by Ingvild Bode and Hendrik Huelss, has been published by McGill-Queen’s University Press.

In Autonomous Weapons Systems and International Norms, Ingvild Bode and Hendrik Huelss present an innovative study of how testing, developing, and using weapons systems with autonomous features shapes ethical and legal norms, and how standards manifest and change in practice. Autonomous weapons systems are not a matter for the distant future – some autonomous features, such as in air defence systems, have been in use for decades. They have already incrementally changed use-of-force norms by setting emerging standards for what counts as meaningful human control. As UN discussions drag on with minimal progress, the trend towards autonomizing weapons systems continues.

A thought-provoking and urgent book, Autonomous Weapons Systems and International Norms provides an in-depth analysis of the normative repercussions of weaponizing artificial intelligence.

Report on Russian perceptions of military AI, automation, and autonomy

Anna Nadibaidze • 27 January 2022

In a report published by the Foreign Policy Research Institute (FPRI), Anna Nadibaidze provides an overview of the different conceptions and motivations that have been guiding Russian political and military leaderships in their ambitions to pursue weaponised AI. 

The report is available on the FPRI website.

Publication of essay by the GCSP

Anna Nadibaidze • 18 January 2022

Anna Nadibaidze’s essay “Commitment to Control Weaponised Artificial Intelligence: A Step Forward for the OSCE and European Security” was published by the Geneva Centre for Security Policy (GCSP). The essay received first prize ex aequo in the 2021 OSCE-IFSH Essay Competition on Conventional Arms Control and Confidence- and Security-Building Measures in Europe.

Publication of analysis in E-International Relations

Tom Watts • 15 December 2021 

Tom Watts co-authored the article “Remote Warfare: A Debate Worth the Buzz?” with Rubrick Biegon and Vladimir Rauta. The piece, published online by E-International Relations, explores the different meanings of remote warfare and implications of this analytical concept for future scholarship.

Read it here.

Publication of special issue on remote warfare

Tom Watts • November 2021 

Tom Watts co-edited the “Remote Warfare and Conflict in the Twenty-First Century” issue of Defence Studies (Volume 21, Issue 4) along with Rubrick Biegon and Vladimir Rauta. He also co-authored two articles within the special issue.

Written contribution to the UN CCW Group of Governmental Experts on LAWS 

AutoNorms • September 2021

The AutoNorms team submitted a written contribution to the Chair of the Group of Governmental Experts (GGE) on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems (LAWS), in preparation for the GGE’s second session, which took place from 24 September to 1 October 2021. The contribution addressed one of the Chair’s guiding questions: “How would the analysis of existing weapons systems help elaborate on the range of factors that should be considered in determining the quality and extent of human-machine interaction/human control/human judgment?”

Read the contribution here.

Opinion piece in TheArticle

Anna Nadibaidze • 15 September 2021

In an opinion piece for TheArticle, Anna Nadibaidze argues that while the debate on the potential regulation of lethal autonomous weapons systems at the UN is stalling, interested states parties will continue to pursue the development of weaponised artificial intelligence, further contributing to the multi-dimensional challenges brought by these technologies.

Read the piece here.

Publication of analysis in the German-language c’t Magazin für Computertechnik

Ingvild Bode & Tom Watts • September 2021

In a piece published in the German-language magazine c’t Magazin für Computertechnik, Ingvild Bode and Tom Watts examine the role and technical capabilities of some of the drone technologies used by the United States as part of the war in Afghanistan.

The German-language version of the text can be accessed here, and a longer English-language version has also been made available on the AutoNorms website.

Written evidence submitted to the Foreign Affairs Committee enquiry on “Tech and the future of UK foreign policy”

Ingvild Bode, Anna Nadibaidze, Hendrik Huelss & Tom Watts • June 2021 

The AutoNorms team has submitted written evidence to the UK House of Commons Foreign Affairs Committee as part of its enquiry on “Tech and the future of UK foreign policy”. The written evidence made a series of recommendations for how the UK Government should act to shape and directly influence AI governance norms. These included calling for the UK to clarify its stance on the role and quality of human control it considers appropriate in the use of force, and acknowledging that setting a positive obligation to maintain human control in specific use of force situations is a crucial step in regulating weaponised AI.

Read the written evidence here.

Analytical essay in Global Cooperation Research – A Quarterly Magazine 

Ingvild Bode • April 2021

In this piece Ingvild Bode examines practice theories as an evolving theoretical programme in the discipline of International Relations. She argues that practice theories have much to gain from remaining diverse in their groundings and actively expanding that diversity beyond the current “canon”. She considers engagements with critical security studies, critical norm research, and Science and Technology Studies particularly useful. Bode also argues for a deeper theorisation of how both verbal and non-verbal practices produce and shape norms.

Read the article here.

Analysis in the Bulletin of the Atomic Scientists 

Ingvild Bode & Tom Watts • 21 April 2021

This analysis piece by Ingvild Bode and Tom Watts summarises their research on air defence systems in the context of the debate on lethal autonomous weapons systems (LAWS). They argue that examining such historical and currently employed systems illustrates pertinent risks associated with their use.

Read the article here.

Publication of a policy report on air defence systems

Ingvild Bode & Tom Watts • February 2021 

The policy report “Meaning-less Human Control”, written by Ingvild Bode and Tom Watts and published in collaboration with Drone Wars UK, argues that decades of using air defence systems with automated and autonomous features have incrementally diminished meaningful human control over specific use of force situations. The report contends that this process shapes an emerging norm, a standard of appropriateness, among states. This norm attributes a diminished role to humans in specific use of force decisions. However, the international debate on LAWS is yet to acknowledge or scrutinize this norm. If this continues, potential international efforts to regulate LAWS through codifying meaningful human control will be undermined.

Read the report here. The catalogue on automation and autonomy in air defence systems can be accessed here.

Book chapter on AI, weapons systems, and human control

Ingvild Bode & Hendrik Huelss • 16 February 2021 

Ingvild Bode and Hendrik Huelss contributed to the book Remote Warfare: Interdisciplinary Perspectives, edited by Alasdair McKay, Abigail Watson and Megan Karlshøj-Pedersen, and published by E-International Relations. Their chapter, “Artificial Intelligence, Weapons Systems and Human Control”, discusses the impact that increasingly autonomous features in weapons systems can have on human decision-making in warfare. 

Read the chapter here.

Publication of analysis in The Conversation 

Ingvild Bode • 15 October 2020 

Writing after the September 2020 discussions of the GGE on LAWS, Ingvild Bode examines the extent to which CCW states parties agree on retaining meaningful human control over the use of force. She argues that many states champion a distributed perspective which considers how human control is present across the entire life-cycle of a weapon. While acknowledging that this reflects operational reality, Ingvild’s analysis also points to the drawbacks of this perspective: it risks making human control more nebulous and distracting from how human control is exerted in specific use of force situations.

Read the article here.

Publication of project description in The Project Repository Journal 

Ingvild Bode • July 2020

In this piece Ingvild Bode maps out the research agenda for the ERC-funded AutoNorms project. The article offers a short overview of AutoNorms’ research background and objectives, as well as the envisaged contribution that the project intends to make over the next five years (pp. 140–143).

Read the article here.