Three Takeaways from the US Military-Anthropic Dispute

By Anna Nadibaidze and Robin Vanderborght. The public dispute that erupted recently between the US Department of War (formerly the Department of Defense) and the technology company Anthropic has captured the attention of global media and experts. Anthropic has developed the popular large language model (LLM) Claude, which is used not only by private consumers worldwide, but […]
Governing AI Technologies in the Military Domain from the Bottom Up: Map of Practices

As of September 2025, there are no international legally binding regulations specific to the development and deployment of AI technologies in the military domain. Top-down, state-led approaches to global governance in this area continue to face challenges such as different regulatory positions, competing interests, and diverging visions of the role of AI technologies in warfare […]
Why Investigating Tech Startups in Algorithmic Warfare Matters

On 29 May 2025, the tech startup Anduril Industries and the Big Tech company Meta announced a privately funded collaboration to design and develop extended reality products integrating AI for the US military. This is just one recent example of private tech companies’ growing involvement in supplying the US government with AI-based products for defence […]
AI in Military Decision Support Systems: A Review of Developments and Debates

Download the report here. By Anna Nadibaidze, Ingvild Bode, and Qiaochu Zhang. A new report published by the Center for War Studies at the University of Southern Denmark reviews developments and debates related to AI-based decision support systems (AI DSS) in military decision-making on the use of force. Written by Anna Nadibaidze, Ingvild Bode, and […]
The New AutoPractices Project: Toward Governing AI Technologies in Military Decision-Making from the Bottom Up

On 1 June 2024, the AutoNorms team started a new policy-oriented project called AutoPractices. The purpose of the AutoPractices project is to initiate and accompany a process of social innovation to govern autonomous and AI technologies (AIT) in the military domain from the bottom up. The project aims to do this by addressing the practices […]
‘Traditional Values’: The Russian Leadership’s Narrative about Generative AI

In February 2024, Vladimir Putin approved a new version of Russia’s national artificial intelligence (AI) development strategy, initially adopted in October 2019. One of the updates is a list of challenges to Russia’s AI development, which mentions “the decision to restrict access to AI technologies, caused by unfair competition on the part of unfriendly […]
Loitering Munitions Report Online Launch Event

On 8 December 2023, 13.00-14.15 (CET)/12.00-13.15 (GMT), an expert panel (including Laura Bruun, Stockholm International Peace Research Institute) will discuss the major findings of the “Loitering Munitions and Unpredictability: Autonomy in Weapon Systems and Challenges to Human Control” report published earlier this year. You can register to attend this online event here. Co-authored by Dr. […]
Five Questions We Often Get Asked About AI in Weapon Systems and Our Answers

By Anna Nadibaidze and Ingvild Bode The ongoing integration of artificial intelligence (AI) and autonomous technologies in weapon systems raises many questions across a variety of fields, including ethics, law, philosophy, and international security. As part of the AutoNorms project, we have contributed to many of these discussions over the past three years, including through […]
Loitering Munitions and Unpredictability: Autonomy in Weapon Systems and Challenges to Human Control

Download the report here. By Ingvild Bode and Tom Watts. A new report published by the Center for War Studies, University of Southern Denmark, and the Royal Holloway Centre for International Security highlights the immediate need to regulate autonomous weapon systems, or ‘killer robots’ as they are colloquially called. Written by Dr. Ingvild Bode and Dr. […]
Consequences of Using AI-Based Decision-Making Support Systems for Affected Populations

The following essay builds on remarks delivered by Ingvild Bode as part of the Expert Workshop “AI and Related Technologies in Military Decision-Making on the Use of Force”, organised by the International Committee of the Red Cross (ICRC) & Geneva Academy Joint Initiative on the Digitalization of Armed Conflict on 8 November 2022. I want […]