Research Article


AutoPractices at the UN GGE on LAWS in March 2026

The AutoNorms team regularly participates in meetings of the United Nations Group of Governmental Experts (GGE) on emerging technologies in the area of lethal autonomous weapons systems. The GGE meetings take place in Geneva and bring together delegations of state parties to the UN Convention on Certain Conventional Weapons (CCW), observer states, as well as representatives of civil society and academia. The AutoNorms team represented the AutoPractices project at the meeting of the first GGE session in 2026 (2-6 March 2026). 

Credit: Ruofei Wang
Statement to the GGE on LAWS on 3 March 2026

On 3 March 2026, Anna Nadibaidze delivered the following statement on behalf of the AutoPractices project, in response to Box III of the GGE on LAWS rolling text (version of 18 December 2025).

Thank you, Chair, for giving me the floor.

I am speaking on behalf of AutoPractices, an international research project hosted by the Center for War Studies at the University of Southern Denmark. We are grateful for the opportunity to observe the GGE’s debates and share our expertise.

Chair, Distinguished Delegates, please allow me to make some remarks regarding paragraphs 7 and 8 of Box III of the rolling text: the paragraphs concerning measures to ensure context-appropriate human judgement and control.

These remarks are based on the findings of the recently concluded AutoPractices project and its main output published in collaboration with the Stockholm International Peace Research Institute: a toolkit of best practices and measures to sustain the exercise of human judgement and control across the lifecycle of AI systems in the military domain. The best practices in this toolkit have been identified by 49 stakeholders who represent different disciplines and regions of the world.

The first remark concerns paragraph 7 and its sub-paragraph A. Sub-paragraph A matches one of the main takeaways of the AutoPractices project, namely that the exercise of human judgement and control is strengthened by conducting continuous legal, ethical, and security assessments that are appropriate to the context of development and use. The AutoPractices toolkit highlights that such assessments should be conducted throughout the lifecycle of LAWS and especially at the early stages, even prior to the development of weapon systems.

The second remark relates to sub-paragraphs B-E, which also match some of the conclusions reached by our project’s stakeholders. One takeaway is that the exercise of human judgement and control is strengthened by setting boundaries and limitations on the development and employment of LAWS. This is particularly important for the early stages of the lifecycle, as decisions taken at those stages affect the quality of human judgement and control later on, including at the use of force stage.

Setting specific objectives and establishing boundaries for developing and using LAWS strengthen humans’ understanding of how these systems work in particular contexts, as well as humans’ ability to reasonably foresee systems’ impacts within those contexts. Boundaries and constraints can be spatial or temporal, or relate to the type of targets, the targeting parameters, or the operational environment—as suggested by sub-paragraph B.

AutoPractices stakeholders highlight that limitations and restrictions should reflect political, legal, ethical, and societal considerations surrounding the integration of AI systems into the military domain. These considerations should be informed by feedback loops and communication mechanisms from the end-users of systems, such as operators. This key takeaway on boundaries and limitations to LAWS also speaks to sub-paragraphs C, D and E of paragraph 7.

Another of our project’s findings is the importance of mechanisms ensuring human intervention, which matches sub-paragraph C on ensuring that parameters of LAWS are not significantly modified by the systems. As AutoPractices stakeholders highlight, the exercise of context-appropriate human judgement and control is strengthened by intervention mechanisms that help ensure that LAWS are employed in ways that align with the strategic, operational, legal, ethical, and other considerations applicable to the context, as well as with the limitations set at the early stages of the lifecycle.

These mechanisms also contribute to establishing safeguards for LAWS to be employed in ways that do not supplant or replace human decision-making, which is particularly important for tracing accountability back to human actors along the lifecycle. These mechanisms therefore reinforce sub-paragraph A’s point on responsible chain of command and control. Feedback loops and continuous communication between end-users and developers of LAWS can be useful practices to assist in delimiting responsibilities and objectives of employing AI technologies in the relevant contexts. 

Risk assessment frameworks intended to provide benchmarks ensuring that LAWS are used for their intended objectives are another best practice highlighted by our project. Robust metrics for evaluating outputs and parameters of LAWS, and their wide-ranging implications for operations, can help relevant actors engage with the intervention mechanisms mentioned above. Risk assessment frameworks are closely connected to the sets of practices highlighted previously: for example, they may include intervention mechanisms for humans. They can also include post-use review mechanisms, highlighting the value of a lifecycle perspective in retaining context-appropriate human judgement and control in the use of force.

Finally, our project adopts a lifecycle approach and includes sets of best practices at each stage of the lifecycle of AI systems in the military domain. This approach matches the message conveyed by paragraph 8 in Box III: the importance of applying measures and practices such as limitations, intervention mechanisms, feedback loops, and risk assessment frameworks throughout the design, development, and use of LAWS.

At the same time, we support the earlier suggestion by the representative of the University of Utrecht to consider using the wording “throughout the entire lifecycle of LAWS” (as in Box IV), as this would offer a more comprehensive perspective and take into account stages beyond design, development, and use, such as the pre-development and post-use stages of the lifecycle.

Thank you, Distinguished Delegates, for your attention, and thank you, Chair.

Credit: UN Web TV
Statement to the GGE on LAWS on 4 March 2026

On 4 March 2026, Ingvild Bode delivered the following statement on behalf of the AutoPractices project, in response to Box IV of the GGE on LAWS rolling text (version of 18 December 2025).

Thank you, Chair, for giving me the floor.

I am speaking on behalf of the AutoPractices project, an international research project hosted by the Center for War Studies at the University of Southern Denmark.

We would like to start by adding some brief reflections on the significance of the human dimension across the boxes and the specific phrasing “context-appropriate human judgement and control”. As many delegations have expressed, concern about the human dimension has been a consistent part of the work conducted by the GGE since 2017. Over time, this concern has crystallised into the current notion of “context-appropriate human judgement and control”, which appears to be acceptable to many delegations in the room.

We understand that delegations continue to hold different understandings about the human dimension. As summarised before, these include whether such a principle is already part of international humanitarian law (IHL), whether it is a means to ensure compliance with IHL, or whether it should be set as a new principle. We would like to focus on what unites these different understandings: there has long been broad support and consensus among high contracting parties around the importance of maintaining and ensuring human judgement and control. This broad consensus, in our view, justifies making the human dimension and “context-appropriate human judgement and control” central and prominent across the text—rather than having it appear only in specific parts.

We would also like to offer some more specific remarks on the human dimension in paragraphs 4A and 4B. These remarks build on insights gained as part of the AutoPractices best practices toolkit for policymakers, developers, and users of AI systems in the military domain, including LAWS.

First, paragraph 4A on testing [refers to “conducting testing and evaluation, including realistic simulations of their use in operational environments”].

Testing is particularly important for ensuring a sufficient role for human actors. The process of Testing, Evaluation, Verification, and Validation (TEVV) is a set of practices meant, among other things, to strengthen understanding of how LAWS would function in specific contexts and how human users would or should respond. Military actors may use various tests, metrics, benchmarks, and procedures to conduct TEVV informed by the anticipated and realistic contexts of operation and by appropriate training data relating to these contexts.

As the AutoPractices project highlights, the use of precise metrics for evaluation in various scenarios strengthens the exercise of “context-appropriate human judgement and control”. This practice involves continuous testing in conditions that match the foreseen context of use to the extent possible—which matches the phrase “including realistic simulations of their use in operational environments” in 4A. It allows testers to evaluate how systems’ performance relates to the legal, ethical, and strategic considerations set at earlier stages of the LAWS lifecycle.

Second, paragraph 4B on education [refers to “ensuring appropriate guidance, training and instructions for those in the responsible chain of command and human operators of LAWS”].

Educating humans—including, but not limited to, operators and commanders—about LAWS plays a key role in retaining sufficient levels of “context-appropriate human judgement and control”.

Such training is not restricted to digital literacy; it should go beyond providing a basic understanding of the technical characteristics of the systems used. Training should also include education about human-machine teaming—that is, how humans interact with LAWS. This is particularly important because it can strengthen humans’ awareness of phenomena such as automation bias and other cognitive biases.

As paragraph 4B highlights, a continuous training process about the dynamics of human-machine teaming, as well as how they might affect human judgement and control, is required to mitigate the biases and assumptions inherent to these dynamics. We would also like to highlight that a sufficient understanding of LAWS and human interactions with them should ideally be balanced so as to avoid users’ over-confidence in and over-reliance on the technologies.

Continuous education also aims to provide humans with the necessary information about the potential impact of using LAWS in different contexts, including the strategic, operational, legal, ethical, and political implications that might arise. Human personnel who are trained to recognise, assess, and respond to outputs of LAWS while considering technical, legal, security, and other aspects are better positioned to make effective use of options for intervention and to reflect upon, contest, and reject algorithmic outputs, if needed.

Thank you, Distinguished Delegates, for your attention, and thank you, Chair.

Featured image credit: Anna Nadibaidze
