This research theme examines US practices in relation to weapons systems with automated and autonomous features. These include operational practices of design, development, and deployment, but also extend further, encompassing the US's evolving positions as delivered in the context of the Group of Governmental Experts (GGE) on lethal autonomous weapons systems (LAWS). This is complemented by an analysis of practices performed by civilian developers of AI applications (in relation to military actors) and of how culture-specific, often fictional representations of weaponised AI and robotics shape public discourse. Practices performed across these different societal contexts in the US are considered potentially productive of norms.
This short contribution addresses the National Security Commission on Artificial Intelligence (NSCAI) report recently published in the United States (US). The report marks an important step in defining future US AI security policy and can be expected to influence the US position on questions relating to the regulation and