This research theme examines US practices in relation to weapons systems with autonomous and automated features. These include operational practices of design, development, and deployment, but also extend further to the United States' evolving stances as delivered in the context of the Group of Governmental Experts (GGE) on lethal autonomous weapons systems (LAWS). This is complemented by an analysis of practices performed by civilian developers of AI applications (in relation to military actors) and of how culture-specific, often fictional representations of weaponised AI and robotics shape public discourse. Practices performed across these different societal contexts in the US are considered as potentially productive of norms.
Articles on the United States
[A shorter version of this piece was published in the German-language c't Magazin für Computertechnik in September 2021, Ingvild Bode & Tom Watts] The United States and its NATO partners have ignominiously withdrawn from Afghanistan. One of this war's many legacies will be the use of remotely piloted aircraft – colloquially referred to as
“Listen, and understand. That Terminator is out there, it can’t be bargained with, it can’t be reasoned with, it doesn’t feel pity or remorse or fear, and it absolutely will not stop…EVER, until you are dead!” – Kyle Reese. References to the Terminator are a ubiquitous feature of debates on Autonomous
This short contribution addresses the National Security Commission on Artificial Intelligence (NSCAI) report recently published in the United States (US). This report marks an important step in defining the US’ future AI security policy and can be expected to influence the US position on questions relating to the regulation and
An international research project examining weaponised artificial intelligence, norms, and order