Research Article

Topics: Human-Machine Interaction, Technology

Governing AI Technologies in the Military Domain from the Bottom Up: Map of Practices

As of September 2025, there are no legally binding international regulations specific to the development and deployment of AI technologies in the military domain. Top-down, state-led approaches to global governance in this area continue to face challenges such as divergent regulatory positions, competing interests, and differing visions of the role of AI technologies in warfare and beyond.

As the AutoNorms project has found, current practices in the design of, personnel training for, and use of AI technologies in military systems have the potential to establish a diminished form of human agency as a ‘norm’ in certain contexts of use-of-force decision-making. A reduced form of human agency in military targeting raises ethical, legal, security, and operational concerns that are insufficiently addressed by the broad and often ambiguous principles featured in current top-down frameworks.

Considering this global challenge, in June 2024 the AutoNorms team launched the project “Governing AI Technologies in Military Systems from the Bottom Up: Practices to Sustain and Strengthen Human Agency”, or AutoPractices. The purpose of the AutoPractices project is to initiate and accompany a process of social innovation to govern AI technologies in the military domain from the bottom up. 

It does so by co-creating a set of ‘best practices’ in the form of a practical toolkit to sustain and strengthen the exercise of human agency when developing and using military systems integrating AI technologies. The operational toolkit, to be finalised by December 2025, will be co-created with stakeholders representing different professional backgrounds and geographies. This process of social innovation can recontextualise the emerging norm of human agency into a positive version through stakeholders enacting different practices from the bottom up.

The map of practices 

As part of the process of social innovation towards the operational toolkit, in September 2025 the AutoPractices project published a map of practices: a collection of practices that participating stakeholders identified as contributing to sustaining and strengthening the exercise of human agency in the development and use of military systems integrating AI.

This map is based on data from 1) an open-ended online survey questionnaire completed by stakeholders from October 2024 to March 2025 via the SurveyMonkey platform and 2) in-person and online interviews conducted with stakeholders who gave consent. The language used in the map of practices stays as close as possible to the language used by stakeholders in their survey and interview responses.

The map of practices is meant as a transitional step towards the final operational toolkit. 

AutoPractices is a Proof of Concept (PoC) project funded by the European Research Council. For more information, see www.autonorms.eu/autopractices 

Featured image credit: jerry chen on Unsplash
