Publications by the AutoNorms team


Books

Bode, I. and Huelss, H. (2022). Autonomous Weapons Systems and International Norms. McGill-Queen's University Press.

Book chapters

Bode, I. and Qiao-Franco, G. (Accepted/In press). “AI Geopolitics and International Relations”. In Handbook on Public Policy and AI, edited by Paul, R., Carmel, E., and Cobbe, J. Cheltenham: Edward Elgar.

Bode, I. and Nadibaidze, A. (Accepted/In press). “Autonomous Drones”. In The De Gruyter Handbook of Drone Warfare, edited by Rogers, J. Berlin: De Gruyter.

Bode, I. and Huelss, H. (2021). “The Future of Remote Warfare? Artificial Intelligence, Weapons Systems and Human Control.” In Remote Warfare: Interdisciplinary Perspectives, edited by McKay, A., Watson, A. and Karlshøj-Pedersen, M. Bristol: E-International Relations Publishing, 218–33.

Peer-reviewed articles

Nadibaidze, A. (2024). Technology in the quest for status: The Russian leadership’s artificial intelligence narrative. Journal of International Relations and Development.

Qiao-Franco, G. and Franco, P. (2024). Insurmountable enemies or easy targets? Military-themed videogame ‘translations’ of weaponized artificial intelligence. Security Dialogue, 55(1), 81-102.

Bode, I. (2024). Emergent Normativity: Communities of Practice, Technology, and Lethal Autonomous Weapon Systems. Global Studies Quarterly 4(1).

Qiao-Franco, G. (2024). An Emergent Community of Cyber Sovereignty: The Reproduction of Boundaries? Global Studies Quarterly 4(1).

Bode, I., Huelss, H., Nadibaidze, A., Qiao-Franco, G. and Watts, T.F.A. (2024). Algorithmic Warfare: Taking Stock of a Research Programme. Global Society 38(1).

Watts, T.F.A. and Bode, I. (2024). Machine guardians: The Terminator, AI narratives and US regulatory discourse on lethal autonomous weapons systems. Cooperation and Conflict 59(1), 107-128.

Bode, I. (2023). Contesting Use-of-Force Norms through Technological Practices. Heidelberg Journal of International Law 83(1), 39-64.

Nadibaidze, A. and Miotto, N. (2023). The Impact of AI on Strategic Stability is What States Make of It: Comparing US and Russian Discourses. Journal for Peace and Nuclear Disarmament 6(1), 47-67.

Bode, I. (2023). Practice-based and public-deliberative normativity: retaining human control over the use of force. European Journal of International Relations 29(4), 990-1016.

Bode, I., and Huelss, H. (2023). Constructing expertise: the front- and back-door regulation of AI’s military applications in the European Union. Journal of European Public Policy 30(7), 1230-1254.

Clancy, R., Bode, I., and Zhu, Q. (2023). The need for and nature of a normative, cultural psychology of weaponized AI. Ethics and Information Technology 25(6).

Bode, I., Huelss, H., Nadibaidze, A., Qiao-Franco, G. and Watts, T.F.A. (2023). Prospects for the global governance of autonomous weapons: comparing Chinese, Russian, and US practices. Ethics and Information Technology 25(5).

Qiao-Franco, G. and Bode, I. (2023). Weaponised Artificial Intelligence and Chinese Practices of Human–Machine Interaction. The Chinese Journal of International Politics 16(1), 106-128.

Qiao-Franco, G. and Zhu, R. (2022). China’s Artificial Intelligence Ethics: Policy Development in an Emergent Community of Practice. Journal of Contemporary China 33(146), 189-205.

Nadibaidze, A. (2022). Great Power Identity in Russia’s Position on Autonomous Weapons Systems. Contemporary Security Policy, 43(3), 407-435.

Biegon, R. and Watts, T.F.A. (2022). Remote Warfare and the Retooling of American Primacy. Geopolitics, 27(3), 948-971.

Biegon, R., Rauta, V. and Watts, T.F.A. (2021). Remote Warfare – Buzzword or Buzzkill? Defence Studies, 21(4), 427-446.

Watts, T.F.A. and Biegon, R. (2021). Revisiting the Remoteness of Remote Warfare: US Military Intervention in Libya During Obama’s Presidency. Defence Studies, 21(4), 508-527.

Huelss, H. (2020). Norms Are What Machines Make of Them: Autonomous Weapons Systems and the Normative Implications of Human-Machine Interactions. International Political Sociology, 14(2), 111-128.

Bode, I. and Huelss, H. (2019). Introduction to the Special Section: The Autonomisation of Weapons Systems: Challenges to International Relations. Global Policy, 10(3), 327-330.

Bode, I. (2019). Norm-making and the Global South: Attempts to Regulate Lethal Autonomous Weapons Systems. Global Policy, 10(3), 359-364.

Huelss, H. (2019). Deciding on Appropriate Use of Force: Human-machine Interaction in Weapons Systems and Emerging Norms. Global Policy, 10(3), 354-358.

Bode, I. and Huelss, H. (2018). Autonomous Weapons Systems and Changing Norms in International Relations. Review of International Studies, 44(3), 393-413.

Huelss, H. (2017). After Decision-Making: The Operationalization of Norms in International Relations. International Theory, 9(3), 381-409.


Reports

Bode, I. and Watts, T.F.A. (2023). Loitering Munitions and Unpredictability: Autonomy in Weapon Systems and Challenges to Human Control. Center for War Studies & Royal Holloway Centre for International Security.

Nadibaidze, A. (2022). Russian Perceptions of Military AI, Automation, and Autonomy. Foreign Policy Research Institute.

Nadibaidze, A. (2022). Commitment to Control Weaponised Artificial Intelligence: A Step Forward for the OSCE and European Security. Geneva Centre for Security Policy.

Bode, I. and Watts, T.F.A. (2021). Meaning-Less Human Control: Lessons from Air Defence Systems on Meaningful Human Control for the Debate on AWS. Drone Wars UK & Center for War Studies, University of Southern Denmark.

Other publications

Bode, I. (2024). Falling under the radar: the problem of algorithmic bias and military applications of AI. ICRC Humanitarian Law & Policy Blog.

Bode, I. and Watts, T.F.A. (2023). Loitering munitions: flagging an urgent need for legally binding rules for autonomy in weapon systems. ICRC Humanitarian Law & Policy Blog.

Bode, I., Huelss, H., and Nadibaidze, A. (2023). Kunstig intelligens i krig (Artificial Intelligence in Warfare). Jysk Fynske Medier i Erhverv+. [in Danish]

Nadibaidze, A. (2023). La guerre « low-tech » de la Russie contre l’Ukraine a discrédité son récit de modernisation militaire (Russia’s ‘Low-Tech’ War against Ukraine Has Discredited Its Narrative of Military Modernisation). Le Rubicon. [in French]

Nadibaidze, A. (2022). Understanding Russia’s Efforts at Technological Sovereignty. Foreign Policy Research Institute Blog.

Nadibaidze, A. (2022). “La inteligencia artificial militarizada en el ámbito nuclear” (Weaponized Artificial Intelligence in the Nuclear Domain). La Vanguardia Dossier No. 84. [in Spanish]

Nadibaidze, A. (2022). Russian Great Power Identity in the Debate on ‘Killer Robots’. Contemporary Security Policy Blog.

Nadibaidze, A. (2022). Quel futur pour le débat international sur les systèmes d’armes autonomes? (What Future for the International Debate on Autonomous Weapon Systems?). Le Rubicon. [in French]

Bode, I., and Nadibaidze, A. (2022). “Von wegen intelligent: Autonome Drohnen und KI-Waffen im Ukraine-Krieg” (Not Really Intelligent: Autonomous Drones and Weaponised AI in the Ukraine War). c’t Magazin für Computertechnik 10/2022. [in German]

Biegon, R., Rauta, V., and Watts, T.F.A. (2021). Remote Warfare: A Debate Worth the Buzz? E-International Relations.

Nadibaidze, A. (2021). The AI Arms Race: How Can We Control the Use of Killer Robots? TheArticle.

Bode, I. and Watts, T.F.A. (2021). “Vereitelte Drohnenaufklärung: Die USA halten Opferzahlen von Drohneneinsätzen zurück” (Thwarted Drone Accountability: The US Withholds Casualty Figures from Drone Strikes). c’t Magazin für Computertechnik. [in German]

Bode, I. (2021). Practice Theories and Critical Security Studies. Global Cooperation Research: A Quarterly Magazine 1/2021, 8-10.

Bode, I. and Watts, T.F.A. (2021). Worried about the Autonomous Weapons of the Future? Look at What’s Already Gone Wrong. Bulletin of the Atomic Scientists.

Bode, I. (2020). The Threat of ‘Killer Robots’ is Real and Closer than You Might Think. The Conversation.

Bode, I. (2018). AI has Already Been Weaponised – And It Shows Why We Should Ban ‘Killer Robots’. The Conversation.

Bode, I. and Huelss, H. (2017). Why ‘Stupid’ Machines Matter: Autonomous Weapons and Shifting Norms. Bulletin of the Atomic Scientists.


1. International debate about AWS at the UN CCW

2. Reports
3. Journal articles
  • Altmann, J. and Sauer, F. (2017). Autonomous Weapon Systems and Strategic Stability. Survival, 59(5), 117-142.
  • Alwardt, C. and Schörnig, N. (2022). A Necessary Step Back? Recovering the Security Perspective in the Debate on Lethal Autonomy. Zeitschrift für Friedens- und Konfliktforschung.
  • Amoroso, D. and Tamburrini, G. (2020). Autonomous Weapons Systems and Meaningful Human Control: Ethical and Legal Issues. Current Robotics Reports, 1, 187-194.
  • Amoroso, D. and Tamburrini, G. (2021). In Search of the ‘Human Element’: International Debates on Regulating Autonomous Weapons Systems. The International Spectator, 56(1), 20-38.
  • Anderson, K. and Waxman, M.C. (2013). Law and Ethics for Autonomous Weapon Systems: Why a Ban Won’t Work and How the Laws of War Can. Columbia Public Law Research Paper.
  • Arkin, R. (2013). Lethal Autonomous Systems and the Plight of the Non-Combatant. AISB Quarterly, 137, 1-9.
  • Asaro, P. (2019). Algorithms of Violence: Critical Social Perspectives on Autonomous Weapons. Social Research: An International Quarterly, 86(2), 537-555.
  • Boutin, B. (2022). State Responsibility in Relation to Military Applications of Artificial Intelligence. Leiden Journal of International Law, 1-18.
  • Crootof, R. (2015). The Killer Robots Are Here: Legal and Policy Implications. Cardozo Law Review, 36(5), 1837-1916.
  • Ekelhof, M. (2017). Complications of a Common Language: Why it is so Hard to Talk about Autonomous Weapons. Journal of Conflict and Security Law, 22(2), 311-331.
  • Ekelhof, M. (2019). Moving Beyond Semantics on Autonomous Weapons: Meaningful Human Control in Operation. Global Policy, 10(3), 343-348.
  • Garcia, D. (2018). Lethal Artificial Intelligence and Change: The Future of International Peace and Security. International Studies Review, 20, 334-341.
  • Gill, A.S. (2019). Artificial Intelligence and International Security: The Long View. Ethics & International Affairs, 33(2), 169-179.
  • Goldfarb, A. and Lindsay, J.R. (2022). Prediction and Judgment: Why Artificial Intelligence Increases the Importance of Humans in War. International Security, 46(3), 7-50.
  • Haas, M.C. and Fischer, S.C. (2017). The Evolution of Targeted Killing Practices: Autonomous Weapons, Future Conflict, and the International Order. Contemporary Security Policy, 38(2), 281-306.
  • Haner, J. and Garcia, D. (2019). The Artificial Intelligence Arms Race: Trends and World Leaders in Autonomous Weapons Development. Global Policy, 10(3), 331-337.
  • Heyns, C. (2016). Human Rights and the Use of Autonomous Weapons Systems (AWS) During Domestic Law Enforcement. Human Rights Quarterly, 38(2), 350-378.
  • Horowitz, M.C. (2016). Why Words Matter: The Real World Consequences of Defining Autonomous Weapons Systems. Temple International and Comparative Law Journal, 30(1), 85-98.
  • Horowitz, M.C. (2019). When Speed Kills: Lethal Autonomous Weapon Systems, Deterrence and Stability. Journal of Strategic Studies, 42(6), 764-788.
  • Hynek, N. and Solovyeva, A. (2021). Operations of Power in Autonomous Weapon Systems: Ethical Conditions and Socio-Political Prospects. AI & Society, 36(1), 79-99.
  • Jensen, B.M., Whyte, C., and Cuomo, S. (2020). Algorithms at War: The Promise, Peril, and Limits of Artificial Intelligence. International Studies Review, 22(3), 526-550.
  • Johnson, J. (2019). Artificial Intelligence & Future Warfare: Implications for International Security. Defense & Security Analysis, 35(2), 147-169.
  • Johnson, J. (2020). Artificial Intelligence: A Threat to Strategic Stability. Strategic Studies Quarterly, 14(1), 16-39.
  • Leys, N. (2018). Autonomous Weapon Systems and International Crises. Strategic Studies Quarterly, 12(1), 48-73.
  • Maas, M. (2019). How Viable is International Arms Control for Military Artificial Intelligence? Three Lessons from Nuclear Weapons. Contemporary Security Policy, 40(3), 285-311.
  • Payne, K. (2018). Artificial Intelligence: A Revolution in Strategic Affairs? Survival, 60(5), 7-32.
  • Petersson, M. (2021). Small States and Autonomous Systems – the Scandinavian Case. Journal of Strategic Studies, 44(4), 594-612.
  • Roff, H.M. (2014). The Strategic Robot Problem: Lethal Autonomous Weapons in War. Journal of Military Ethics, 13, 211-227.
  • Rosendorf, O. (2021). Predictors of Support for a Ban on Killer Robots: Preventive Arms Control as an Anticipatory Response to Military Innovation. Contemporary Security Policy, 42(1), 30-52.
  • Rosendorf, O., Smetana, M., and Vranka, M. (2022). Autonomous Weapons and Ethical Judgments: Experimental Evidence on Attitudes Toward the Military Use of “Killer Robots”. Peace and Conflict: Journal of Peace Psychology, 28(2), 177-183.
  • Rosert, E. and Sauer, F. (2021). How (Not) to Stop the Killer Robots: A Comparative Analysis of Humanitarian Disarmament Campaign Strategies. Contemporary Security Policy, 42(1), 4-29.
  • Rosert, E. and Sauer, F. (2019). Prohibiting Autonomous Weapons: Put Human Dignity First. Global Policy, 10(3), 370-375.
  • Sauer, F. and Schörnig, N. (2012). Killer Drones: The ‘Silver Bullet’ of Democratic Warfare? Security Dialogue, 43(4), 363-380.
  • Sauer, F. (2020). Stepping Back from the Brink: Why Multilateral Regulation of Autonomy in Weapons Systems Is Difficult, Yet Imperative and Feasible. International Review of the Red Cross, 102(913), 235-259.
  • Schwarz, E. (2021). Autonomous Weapons Systems, Artificial Intelligence, and the Problem of Meaningful Human Control. The Philosophical Journal of Conflict and Violence, 5(1), 53-72.
  • Sharkey, A. (2019). Autonomous Weapons Systems, Killer Robots and Human Dignity. Ethics and Information Technology, 21(2), 75-87.
  • Sharkey, N. (2010). Saying ‘No!’ to Lethal Autonomous Targeting. Journal of Military Ethics, 9(4), 369-383.
  • Suchman, L. (2020). Algorithmic Warfare and the Reinvention of Accuracy. Critical Studies on Security, 8(2), 175-187.
  • Verbruggen, M. (2019). The Role of Civilian Innovation in the Development of Lethal Autonomous Weapon Systems. Global Policy, 10(3), 338-342.
  • Verdiesen, I., Santoni de Sio, F. and Dignum, V. (2021). Accountability and Control Over Autonomous Weapon Systems: A Framework for Comprehensive Human Oversight. Minds & Machines, 31, 137-163.
  • Wyatt, A. (2020). Charting Great Power Progress toward a Lethal Autonomous Weapon System Demonstration Point. Defence Studies, 20(1), 1-20.
4. Country case studies
China

Domestic policies and laws: 

  • 2015: Made in China 2025 Plan (中国制造2025) [Chinese version]
  • 2015: The Guidance of the State Council on Actively Promoting the “Internet Plus” Action (关于积极推进“互联网+”行动的指导意见) [Chinese version]
  • 2016: The 13th Five-Year National Science and Technology Innovation Plan (十三五国家科技创新规划) [Chinese version]
  • 2016: Three-Year Action Implementation Plan for “Internet Plus” Artificial Intelligence (“互联网+”人工智能三年行动实施方案) [Chinese version]
  • 2017: State Council’s Plan for the Development of New Generation Artificial Intelligence (国务院关于印发新一代人工智能发展规划的通知) [Chinese version]
  • 2017: The Three-Year Action Plan to Promote the Development of New-Generation Artificial Intelligence Industry (2018-2020) (促进新一代人工智能产业发展三年行动计划（2018-2020）) [Chinese version]
  • 2017: Opinions of the General Office of the State Council on Promoting Closer Civil-Military Integration in the National Defence Science and Technology Industry (关于推动国防科技工业军民融合深度发展的意见) [Chinese version] [English version]
  • 2018: Action Plan for Artificial Intelligence Innovation in Universities (高等学校人工智能创新行动计划) [Chinese version]
  • 2018: The White Paper of Artificial Intelligence Standardization (人工智能安全标准化白皮书（2018版）) [Chinese version]
  • 2019: The White Paper of Artificial Intelligence Standardization (人工智能安全标准化白皮书（2019版）) [Chinese version]
  • 2019: The White Paper on “China’s National Defence in the New Era” (新时代的中国国防) [Chinese version] [English version]
  • 2019: Report on the Work of the Government (2019年政府工作报告) [Chinese version]
  • 2019: Next Generation AI Governance Principles — Developing Responsible AI (新一代人工智能治理原则——发展负责任的人工智能) [Chinese version]
  • 2019: Report of Ethical Risk Analysis of Artificial Intelligence (人工智能伦理风险分析报告) [Chinese version]
  • 2019: Guidelines for the Construction of the National New Generation AI Innovation and Development Pilot Zones (国家新一代人工智能创新发展试验区建设工作指引) [Chinese version]
  • 2020: Proposal to Fully Leverage the Forces of AI to Jointly Fight the COVID-19 Pandemic (工信部《充分发挥人工智能赋能效用 协力抗击新型冠状病毒感染的肺炎疫情倡议书》) [Chinese version]
  • 2020: Measures to Support the Resumption of Work and Production and the Smooth Operation of the Economy via Scientific and Technological Innovation (科技部《关于科技创新支撑复工复产和经济平稳运行的若干措施》) [Chinese version]
  • 2020: Suggestions on Promoting the Integration of Disciplines and Accelerating the Cultivation of Postgraduates in the Field of AI (关于双一流建设高校促进学科融合 加快人工智能领域研究生培养的若干意见) [Chinese version]
  • 2020: The Legislative Work Plan of the NPC Standing Committee for 2020 (全国人大常委会2020年度立法工作计划) [Chinese version]
  • 2020: Guidelines for the Construction of the National New Generation AI Innovation and Development Pilot Zones (Revised Edition) (国家新一代人工智能创新发展试验区建设工作指引（修订版）) [Chinese version]

Articles and reports: 

  • Akimoto, D. (2019). International Regulation of “Lethal Autonomous Weapons Systems” (LAWS): Paradigms of Policy Debate in Japan. Asian Journal of Peacebuilding, 7(2), 311-33.