Russia’s Perspective on Human Control and Autonomous Weapons: Is the Official Discourse Changing?

For many years, Russia’s Ministry of Defence (MoD) has been developing and testing weapons, vehicles and systems that use Artificial Intelligence (AI), with the goal of modernising the Russian Armed Forces by gradually removing humans from military tasks.

The Russian leadership, both political and military, has a visibly strong interest in weapons with autonomous features. On May 31, the MoD announced the creation of a special AI department with its own budget, which would begin its work in December 2021.

What remains unclear, however, is the Russian position on the role of humans in a weapon system’s decision-making process, especially when it comes to critical functions like selecting and attacking a target.

In the international debate about Lethal Autonomous Weapons Systems (LAWS), Russia opposes both a ban on LAWS and limitations on the development of weaponised AI. Officially, the Russian delegation at the United Nations’ Group of Governmental Experts on LAWS argues that human control over the operation of LAWS is “an important limiting factor”, but that “specific forms and methods of human control should remain at the discretion of States”.

However, recent discourse from Russian officials, as well as media reports, suggests that the focus is shifting towards weapons capable of operating completely on their own, without humans remaining ‘in the loop’ of the decision-making process.

At the educational “New Knowledge” forum held across Russia in May, Defence Minister Sergei Shoigu said that Russia has already begun producing “weapons of the future” such as combat robots that are “capable of fighting on their own”, just like those shown in science-fiction movies.

Earlier in May, a high-ranking military source told the news site Gazeta.Ru about an ongoing project to create swarms of AI-controlled unmanned maritime vehicles for combat both underwater and on the surface. The source said that whichever country manages to develop such technologies will receive “undeniable advantages in the conduct of armed combat at sea” and will, “without any exaggeration, become the lord of the oceans”.

Human control of autonomous weapons: Expectations vs reality

At the same time, part of the military and political leadership still does not trust AI enough to let it take important decisions on its own. There is visible scepticism towards machines and their ability to take such decisions without human oversight. It also remains unclear whether or how Russia plans to use the “weapons of the future” that Shoigu mentioned.

Military experts and representatives of entities that produce and research weaponised AI in Russia, such as the Kalashnikov Concern or Tecmash, are keen to point out in interviews that even if their robotic systems can go into fully automatic mode, the last word on the use of force remains with the human operator.

A similar dilemma is visible in the discourse of the top leadership. President Putin is visibly interested in the strategic benefits of modernising the Armed Forces, while remaining wary of AI and technology more generally. In December 2020, he said that weapons using AI are one of the priorities for the Russian military, and that such weapons “in the near future will largely determine the outcome of a battle”.

At another event in December, the “AI Journey 2020”, Putin argued that AI will “never replace humans,” adding, “machines will control humans to a large extent…but a person must ultimately control these machines.” When asked whether AI could become president, he noted that AI has no “heart, no soul, no feelings of compassion and conscience”, but that it could be a “good assistant” and even a “teacher for anyone, including the head of state”. In sum: AI can be useful – especially in the military sphere – but cannot replace human qualities.

Russia’s discussions about LAWS are guided by its observations and perceptions of other states’ actions and discourse. As a recent CNA report points out, the Russian leadership is closely observing developments in the US, where there is also pressure to pursue weaponised AI and diminish the human role in warfare. The recent emphasis on fully autonomous weapons in Russian discourse could therefore be a response to how the issue of LAWS is discussed in the US. In turn, the US may be doing the same, depending on its perception of what states like China and Russia are doing.

Ultimately, it seems that Russian and US leaders face the same questions about the development of autonomous weapons. Once one side takes action, the other tends to follow, thereby sustaining a vicious cycle of competition.

Russia’s debates about LAWS and human control over the use of force show that Russian elites pay close attention to developments in the US. However, contrary to what some Russian officials argue, the development of fully autonomous weapons systems is not inevitable, and the rhetoric coming from some policymakers in recent weeks does not mean that Russia will abandon its official position on human control over the use of force.

Featured image credit: A. Savin via WikiCommons.
