Research Article

Topics: Human-Machine Interaction

Toolkit of Best Practices to Strengthen Human Agency in the Military Domain

Routine uses of artificial intelligence (AI) technologies affect military personnel’s capacity to exercise human agency. The impacts of using AI on the exercise of human agency in the military domain should be addressed via bottom-up forms of governance. Such a process starts with the practices performed by stakeholders in technical, military, legal, and political fields.

The AutoPractices toolkit includes best practices which contribute to sustaining and strengthening the exercise of human agency in the development and use of military systems integrating AI technologies. These best practices concern personnel at various levels of military command, as well as in development, design, engineering, analysis, and other human-led fields across the lifecycle of AI systems.

The toolkit is the capstone output of the European Research Council (ERC)-funded AutoPractices project, which initiated a process of social innovation to govern AI technologies in military systems from the bottom up. This process involved the participation of 49 stakeholders representing diverse disciplines and all regions of the world. The stakeholders co-created the toolkit based on their experiences and fields of expertise.

The AutoPractices project takes as its starting point the assumption that the exercise of human agency in the development and deployment of AI systems should be sustained and strengthened across these systems’ lifecycle. The need for exercising a sufficient level of human agency is a recognised governance principle across global and regional initiatives on AI in the military domain. However, most initiatives in this space aim to set high-level legal and ethical principles on the human role in the use of force from the top down.

The bottom-up approach pursued by the AutoPractices project is complementary to these initiatives. It recognises that practices performed by various stakeholders in technical, military, and political spaces shape the implementation of high-level principles and thereby what key norms and guidelines mean at the practical level. At present, some of these practices appear to unintentionally lead to accepting a compromised, diminished quality of human agency in the use of force.

Yet, implementing best practices from the bottom up can also shape a positive version of human agency, ensuring a genuinely high quality of that agency rather than a nominal one. This toolkit identifies the practices that contribute to recontextualising the emerging norm of human agency into this positive version. AutoPractices’ bottom-up approach to governing AI in the military domain is crucial for achieving the widely shared high-level principle of human agency over use-of-force decision-making.

The best practices toolkit

This toolkit results from a co-creative process in which the stakeholders played an active role from start to finish. The AutoPractices team gathered practical insights from stakeholders via surveys, interviews, and workshops. Consequently, the practices compiled in the toolkit reflect those stakeholders’ viewpoints. Throughout the project, stakeholders reviewed the two main AutoPractices outputs—the Map of Practices and the draft toolkit—and the AutoPractices team refined these documents based on stakeholders’ feedback.

This practical toolkit is intended for political decision-makers, designers, technical experts, and military personnel involved in the development and use of military systems integrating AI technologies. It may also be useful for researchers and scholars across various disciplines, civil society, and other stakeholders involved in the global debate on governing AI in the military domain.

The toolkit is particularly relevant for AI systems integrated into the complex process of use-of-force decision-making. However, it does not apply to any specific set of systems, whether autonomous weapon systems (AWS) or AI-based decision-support systems (AI DSS). Rather, the co-creation process involved a broad discussion, with the aim of “laying out a framework that can be translated by a lot of different people into practical implementation”, as one stakeholder put it.

Summary of recommendations and take-aways

The practices along the lifecycle revealed by the AutoPractices co-creation process form the recommendations we put forward to sustain and strengthen the exercise of human agency. Based on these practices, we highlight four key take-aways for policymakers, developers, and users of AI-based systems in the military domain:

A. Establish boundaries at the early stages of the AI lifecycle

Determining specific objectives for the development and use of AI systems early in the lifecycle, including limitations and boundaries on the applications of these systems, strengthens the exercise of human agency and humans’ ability to make meaningful choices down the line.

B. Implement feedback and communication mechanisms across the AI lifecycle

Constant feedback and communication mechanisms between actors involved at various lifecycle stages enable human understanding of AI systems and their uses within specific contexts. This enhances transparency and thereby human accountability, and helps prevent undue AI influence.

C. Prioritise education and training across the AI lifecycle

The continuous education and training of humans across lifecycle stages (policymakers, developers, testers, operators, and users, among others) play a crucial role in raising humans’ AI literacy and awareness of the dynamics of human-AI interaction. This mitigates some of the risks arising from these dynamics in operating contexts and contributes to strengthening the exercise of human agency within those contexts.

D. Adopt strategies and steps to implement the practices listed in this toolkit

Developing and adopting their own concrete approaches to operationalising the practices presented in this toolkit would allow policymakers to go beyond high-level principles on responsible and ethical AI while accounting for specific types of systems, conditions, and contexts.

AutoPractices is a Proof of Concept project funded by the European Research Council and based upon AutoNorms. For more information, see www.autonorms.eu/autopractices.  

