Research Article

Topics: All, China, Technology, United States of America

Can Track II Dialogues be the New “Ping-Pong” Diplomacy to Thaw the Sino-US Relationship on Military AI?

China and the US find themselves increasingly enmeshed in a deteriorating relationship as the two countries contest for primacy across many fields. Both Beijing and Washington view technological leadership, especially an edge in artificial intelligence (AI), as vital to gaining the upper hand in this intensified power contest. As the two rivals scale up investment in AI research and development, both are integrating AI into a widening range of military applications. A full-scale arms race in AI weapons is thus potentially on the horizon.

To the disappointment of many observers, the UN process aimed at reaching a multilateral deal to regulate the development and use of lethal autonomous weapons systems (LAWS) has come to a standstill. Statements delivered by the Chinese and US delegations at meetings of the UN Group of Governmental Experts (GGE) on LAWS also reveal opposing views on the sufficiency of international law and the form regulation should take: China supports a legal ban on LAWS, albeit based on a narrow interpretation of what they constitute, while the US insists that extant international law is adequate.

This discord over regulating LAWS, against the background of deepening strains between the two countries over trade, security and other issues, continues to preoccupy policy and academic circles. Efforts to resolve differences between the two sides have, sadly, escaped widespread attention in the past few years.

This short article is a preliminary attempt to direct research attention to cooperation rather than competition between China and the US by exploring their Track II dialogues on AI-enabled military systems, organised by the Centre for International Security and Strategy (CISS) of Tsinghua University and the Brookings Institution. It assesses the achievements of this “unofficial” mechanism in light of the theoretical literature on Track II diplomacy. In 1971, an accidental encounter between Chinese and US ping-pong players successfully broke the decades-long deadlock in diplomatic relations between the two countries. Can Track II dialogues become the new “ping-pong” diplomacy, thawing the Sino-US relationship as it thawed in the early 1970s?

 

Sino-US Track II dialogues on AI and security

The Track II dialogues on military AI between Tsinghua University and the Brookings Institution developed from the two institutions’ ongoing conversations and cooperation on various issues. Since October 2019, the two sides have held six rounds of talks, the first two in person in Beijing and Munich and the last four online. The number of participants has expanded over the years to include academics and non-governmental security specialists from a growing array of institutions beyond Tsinghua and Brookings. For example, the Chinese side additionally invited experts from the National University of Defence Technology, Beijing University of Posts and Telecommunications, China Electronics Technology Group Corporation, and China Arms Control and Disarmament Association. The US side was joined by experts from the Berggruen Institute, Minderoo Foundation, Massachusetts Institute of Technology, Stanford University, and the Center for a New American Security (see the list of participants).

The Tsinghua-Brookings dialogues entail efforts to “develop new norms, confidence-building measures and boundaries around acceptable uses of novel technologies”. The conversations are organised around four themes: 1) Off-limits targets (including geographic restrictions on the use of AI-enabled weapons); 2) Proportionality and human oversight (mainly to avoid unintended military escalation); 3) Off-limits data in military use; and 4) The roles China and the US should play in international norm-building.

In theory, Track II dialogues deal with issues that may be considered too sensitive to be raised in Track I meetings; and by thinking “outside the box”, can inform decisionmakers of fresh approaches to problems that are at an impasse in interstate deliberations. Have the Tsinghua-Brookings dialogues lived up to these expectations?

The answer is affirmative in at least several respects. Fu Ying and John Allen, leaders of the Chinese and US sides respectively, shared in Noema in December 2020 the consensus that had developed over the first three rounds of deliberations. Both sides consider a blanket ban on the development of autonomous weapons impractical or undesirable. Instead, they see the militarisation of AI as “all but inevitable” and “yet to approach its full potential”. However, both sides share concerns over the potential for AI-enabled systems to hurt civilians and escalate conflicts. They also consider AI-enabled systems limited, owing to potential biases in their training data and their inability to replicate human cognitive capacities such as intuition, emotion, responsibility and values.

Therefore, they agree that: first, the use of autonomous weapons systems should comport with customary and international law (especially the principles of proportionality and distinction), and both developers and operators should be trained in, and held responsible for, ensuring full compliance with international law; second, AI systems should operate under appropriate human oversight or control; third, more research into “explainable AI” and more testing and evaluation regimes are needed; fourth, it is important to control the proliferation of AI-enabled weapons systems to prevent malicious use by non-state actors; and fifth, public education should reflect the need for military restraint.

Only sporadic information is available on the last three rounds of talks, meaning that some details of the agreements reached are presently missing. However, during the fourth round of dialogue in November 2021, the two sides exchanged understandings of key terms related to AI systems, used hypothetical escalation scenarios to anticipate potential conflicts of interest, and developed rules for the use of AI systems to ameliorate those conflicts. They further agreed on the leading role of humans in the development, production, deployment and use of AI, and on the importance of exercising extra caution when deploying AI-enabled systems in high-risk scenarios.

Rounds five and six introduced confidence-building measures (CBMs) to reduce risks in several escalation scenarios. Moreover, Chinese and US experts shared views on the testing and evaluation of AI systems prior to their deployment and offered advice from technical, legal, ethical and regulatory perspectives on making AI systems more robust. These exchanges are intended to help each side better comprehend the other’s perceptions and to identify common norms for the development and use of AI systems.

However, the assessment of the Tsinghua-Brookings dialogues is not entirely positive. The theoretical literature views Track II experts as relatively independent of government and thus freer to explore innovative approaches to the problems facing governments. But as we have seen, both the Chinese and US groups consist of influential figures who enjoy privileged access to their respective governments. Fu Ying has served as vice-minister of foreign affairs and chairperson of the foreign affairs committee of the National People’s Congress. John Allen is a retired four-star Marine Corps general and former head of NATO forces in Afghanistan. Both Fu and Allen, as well as many other participants in the dialogues, provide consultancy to government entities. Although these linkages place the Track II mechanism in a better position to influence policymaking, the need to maintain good relations with government institutions and officials may hamper its potential to depart from well-trodden paths and find more effective approaches to the regulation of AI weapons.

This limitation partly explains why the agreements reached so far are hardly pathbreaking. Many of them simply restate consensus positions reached at the GGE. The newer suggestions on public education, CBMs and the exchange of views on key terms are largely practical, low-cost, and peripheral to states’ core security concerns. It is also possible that more tangible results deemed politically sensitive have not been revealed to the public.

Further, the Track II deliberations between Tsinghua and Brookings stop short of specifying what human control or oversight entails, including its implications at different stages of an AI system’s life cycle. Nor have they successfully addressed measures to curb military build-ups and ease mutual suspicions. Such specifications would require greater transparency about the two sides’ practices in the research, development, production, deployment and use of AI weapons, and for that kind of transparency to occur, a much higher level of trust between the Chinese and US experts is needed.

 

Concluding remarks

Track II dialogues are important for the regulation of weaponised AI because the technological complexity inherent in the development and deployment of autonomous weapons generates a need for expertise in shaping state policies. Insofar as Chinese and US epistemic communities develop common understandings of problems and solutions, they may help their respective governments reach more compatible policy positions.

It is probably unrealistic to think Track II dialogues alone can transform the fundamental drivers of the Sino-US relationship from rivalry to cooperation, considering the very different preference structures and some incompatible goals between the two countries. However, the assumption that China and the US will be locked in a “Thucydides trap” to compete for power and status should not be taken for granted. Efforts to nurture some common ground and moderate the intensity of competition should not be simply dismissed. Progress on Track II, especially through the development of a mutually agreed set of rules, regulations and norms governing the use of AI, can contribute meaningfully to reducing the likelihood of a miscalculation or misunderstanding, and maintaining competition at a healthy level that is conducive to human wellbeing.


Featured image credit: Ugurhan Betin via the Global Innovation Policy Centre
