Research Article

Topics: Political Process, Technology, United States of America

The National Security Commission on Artificial Intelligence and the US Policy on AWS

This short contribution discusses the National Security Commission on Artificial Intelligence (NSCAI) report recently published in the United States (US). This report marks an important step in defining the US’ future AI security policy and can be expected to influence the US position on questions relating to the regulation and prohibition of militarised AI. It also explicitly addresses the development of autonomous weapons systems and promotes viewpoints and arguments that require critical discussion.

This blog entry focusses on two aspects: first, the report’s substance and, second, the influence of industry on US federal government policy making, using the NSCAI as an example.

On 1 March 2021, the NSCAI submitted its final report to Congress. The 800-page report is the output of a high-level panel of commissioners established in 2018 with the mandate “to consider the methods and means necessary to advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States”. The NSCAI’s primary purpose was to inform and advise the federal government—principally the White House, but also Congress—on the US’ “competitiveness” in AI, with a focus on security and defence.

The work and recommendations of the Commission have gained attention in the context of the overall debate on the role and regulation of AI, not least because the Commission’s advice can be regarded as influential for future US policy in this area. In particular, the Commission’s findings bear directly on the ongoing debate about so-called autonomous weapons systems (AWS), often also referred to as ‘killer robots’.

In this regard, the advice of the NSCAI as presented in its final report is important for two reasons:

First, the Commission’s extensive reflections on the meaning and role of AI for society, and for security and defence in particular, are influential in establishing, or rather reproducing and stabilising, a specific discourse about AI. This discourse is predicated on the position that we have entered a “race for AI supremacy” (p. 7) and that the “ability of a machine to perceive, evaluate, and act more quickly and accurately than a human represents a competitive advantage in any field—civilian or military” (p. 7). This understanding of the need for, and the benefits of, militarised AI dominates much of the political and public discourse on these technologies. It presents the development of AWS as an inevitable necessity and creates a sense of urgency to invest in their development, framed as a response to a growing threat.

Moreover, the final report does not engage substantially with controversial topics and the existing debate on AI and AWS, which, for example, questions the extent to which “meaningful human control” is actually possible. Instead, it endorses the formulation that “appropriate levels of human judgment” (p. 92) must be involved. It remains unclear what “appropriate” and “human judgment” mean in this context, and whether the important question of where such judgment starts and ends in weapons systems is being considered.

The report posits that, “authorized by a human commander or operator, properly designed and tested AI enabled and autonomous weapon systems can be used in ways that are consistent with international humanitarian law” (p. 10). At the same time, it discusses autonomy and the requirement that “only human beings can authorize employment of nuclear weapons” (p. 10), a point rarely raised in the international debate. In fact, while this may seem of little specific concern to an international community still debating the very basics of autonomy and human control, the phrasing implicitly opens the door to the use of autonomy in all other (weapons) systems. This is also underpinned by a further recommendation to “develop international standards of practice for the development, testing, and use of AI-enabled and autonomous weapon systems” (p. 10).

Further, while the Commission concludes that “[t]here is little evidence that U.S. competitors have equivalent rigorous procedures to ensure their AI-enabled and autonomous weapon systems will be responsibly designed and lawfully used”, it also argues that it “does not support a global prohibition of AI-enabled and autonomous weapon systems” (p. 92).

According to the NSCAI, the reasons for this judgement are the absence of a shared definition of autonomy, the difficulty of monitoring compliance with a global prohibition on the development of these technologies, and the perception that “[c]ommitments from states such as Russia or China likely would be empty ones” (p. 96). But what this argument leaves out is that international law already includes regulations and prohibitions of weapons systems that are neither clearly defined nor easy to monitor, nor universally considered taboo, such as nuclear weapons or blinding laser weapons.

Second, the work of the Commission shows the extent to which national governments and government institutions draw on expertise from the business and industry sectors, or on staff with close links to these sectors, as represented by the fifteen NSCAI Commissioners. The Commission was chaired by Eric Schmidt (former chairman of Google) and vice-chaired by Robert Work (former Deputy Secretary of Defense), and comprised, inter alia, employees of Amazon, Microsoft, Google, and Oracle. Arguably, this underlines a recent development in the defence and security sector: while governments have a long history of seeking external advice, the security dimension was traditionally less influenced by external expertise, apart from, for instance, the lobbying efforts of the weapons industry.

However, with the acceptance of an AI race in security becoming commonsensical, the government of the US, the country with the highest absolute defence spending, is increasingly seeking input from tech companies. The Department of Defense (DoD)’s “Defense Innovation Unit”, launched in 2015 to facilitate the adoption of civilian-commercial technology in the US military, is headquartered in Mountain View, California, in the heart of Silicon Valley. Mountain View is also the location of the headquarters of Google and its parent company Alphabet, as well as of major offices of other important tech companies. This marks a reversal of the established dual-use process, in which technological innovations such as the internet were transferred from the military to the civilian sector. The involvement (and later withdrawal) of Google in the DoD’s Project Maven and Microsoft’s collaboration with the US Army to develop HoloLens-based headsets are examples that have recently attracted public attention.

In sum, the main argumentation of the final report stands in contrast to critical work on AWS of the past few years. The Commission has largely developed viewpoints on AWS that legitimise, and even urge, their development based on assumptions that are highly controversial. Critical research and expertise by, inter alia, academics and NGOs remain unconsidered. The report marks the first explicit justification of the development of AWS by a major (or the major) state pursuing such weapons technologies. It can set a significant precedent for future global policies on this issue and can be read as a further setback in attempts by the global community to prohibit AWS.
