Submission on Autonomous Weapon Systems to the UN Secretary-General

The following is the AutoNorms project’s response to Resolution 78/241 on “Lethal autonomous weapons systems” adopted by the United Nations (UN) General Assembly on 22 December 2023. The resolution requests the UN Secretary-General to seek views, including those of Member States, civil society, and the scientific community, on “lethal autonomous weapons systems, inter alia, on […]

The Imaginaries of Human-Robot Relationships in Chinese Popular Culture

The portrayals of artificial intelligence (AI) and human-robot interactions in popular culture, along with their potential impact on public perceptions of AI and the regulations governing this evolving field, have garnered growing interest. Building on previous studies of public imaginaries of AI in Hollywood movies, particularly the Terminator franchise, in this essay I investigate […]

‘Traditional Values’: The Russian Leadership’s Narrative about Generative AI

In February 2024, Vladimir Putin approved a new version of Russia’s national artificial intelligence (AI) development strategy, initially adopted in October 2019. Among the updates is a list of challenges to Russia’s AI development, which mentions “the decision to restrict access to AI technologies, caused by unfair competition on the part of unfriendly […]

Japanese ‘Robot Culture’ and the Military Domain: Fact or Fiction?

Imagine that you are sitting in a restaurant somewhere in central Tokyo. You have just ordered your lunch using the tablet provided. Suddenly, coming from the direction of the kitchen, you hear a jingle playing. Next, a little white robot turns the corner and drives in your direction. As it comes closer, you see that […]

AI Summits and Declarations: Symbolism or Substance?

The UK’s AI Safety Summit, held on 1-2 November 2023 at Bletchley Park, has generated mixed responses from experts and commentators. Some praise it as a “major diplomatic breakthrough” for Prime Minister Rishi Sunak, especially as he managed to get 28 signatures, including those of China, the EU, and the US, on the Bletchley […]

Loitering Munitions Report Online Launch Event

On 8 December 2023, 13.00-14.15 (CET) / 12.00-13.15 (GMT), an expert panel (including Laura Bruun, Stockholm International Peace Research Institute) will discuss the major findings of the “Loitering Munitions and Unpredictability: Autonomy in Weapon Systems and Challenges to Human Control” report published earlier this year. You can register to attend this online event here. Co-authored by Dr. […]

The Creator of New Thinking On AI? Popular Culture, Geopolitics, and Why Stories About Intelligent Machines Matter

Whilst the depiction of weaponised artificial intelligence (AI) technologies in popular culture is often highly inaccurate and dramatised, Hollywood blockbusters provide the starting point from which many members of the public begin to develop their thinking about these technologies. For instance, news articles discussing AI are often accompanied by images of metallic silver skulls with […]

Global Governance of AI in the Military Domain

The AutoNorms team has submitted the short paper below to the United Nations Office of the Secretary-General’s Envoy on Technology. In preparation for the first meeting of the Multi-stakeholder High-level Advisory Body on AI, the Office issued a call for papers on global AI governance. AutoNorms’ submission, written by Ingvild Bode, Hendrik Huelss, Anna […]

Five Questions We Often Get Asked About AI in Weapon Systems and Our Answers

By Anna Nadibaidze and Ingvild Bode

The ongoing integration of artificial intelligence (AI) and autonomous technologies in weapon systems raises many questions across a variety of fields, including ethics, law, philosophy, and international security. As part of the AutoNorms project, we have contributed to many of these discussions over the past three years, including through […]