The extent to which humans remain in direct control of the use of force and the quality of that control are key themes animating the international debate on autonomous weapons systems. This research theme examines these concerns in the context of practices of human-machine interaction and how they shape emerging use of force norms, including an emerging norm of “meaningful” human control.
Articles on human-machine interaction
Loitering Munitions and Unpredictability: Autonomy in Weapon Systems and Challenges to Human Control
By Ingvild Bode and Tom Watts A new report published by the Center for War Studies, University of Southern Denmark, and the Royal Holloway Centre for International Security highlights the immediate need to regulate autonomous weapon systems, colloquially called ‘killer robots’. Written by …
A lack of human control, or a substantially diminished quality of such control, is often understood as the major problem associated with military AI. The US Department of Defense (DoD) ‘Directive 3000.09’, released in 2012 as one of the first policy documents on autonomy in weapon systems, for example, states …
The following essay builds on remarks delivered by Ingvild Bode as part of the Expert Workshop “AI and Related Technologies in Military Decision-Making on the Use of Force”, organised by the International Committee of the Red Cross (ICRC) & Geneva Academy Joint Initiative on the Digitalization of Armed Conflict on …
[The following essay builds on a contribution submitted by Ingvild Bode to the RUSI/HRI project “The Future Rules of Warfare”. The essay reflects on how current norms of conflict and warfare might be changing.] The legal norms enshrined in the UN Charter, as well as associated legal frameworks such as …
An international research project examining weaponised artificial intelligence, norms, and order