Research Article

Topics: Technology

The New Fog of War: Algorithms, Computer Vision, and Weapon Systems

War is the realm of uncertainty; three quarters of the factors on which action in war is based are wrapped in a fog of greater or lesser uncertainty. A sensitive and discriminating judgment is called for; a skilled intelligence to scent out the truth.
— Carl von Clausewitz, On War (1832 [1873])

The visual dimension is one of the key elements of warfare that is important to, but also clearly shaped by, technological developments. It is apparent in policy discourse that the augmentation of sight, captured in military concepts such as situational awareness, is considered a field where technologies such as AI, or machine learning more specifically, are expected to provide an edge over ‘adversaries’. While comparatively simple tools such as monoculars and binoculars were once used to overcome the limitations of the human eye, allowing soldiers to see what others cannot (or to remain unseen), advances in computer vision promise to alter our “scopic regimes” far beyond what is possible with, for example, night-vision devices.

Technological developments are embedded in an influential narrative about technology finally providing the “fix” that lifts the centuries-old “fog of war” – an expression, associated with Clausewitz’s On War, that symbolizes, both practically and metaphorically, the challenges to situational awareness on the battlefield. The “fog of war” is part of an imagination of visual omnipotence – or omniscience – represented by references to the all-seeing eye of God or mighty mythological figures, whose names are given to military systems such as the “Gorgon Stare”. A significant body of research in the past two decades has addressed the implications of the “drone’s gaze”, which is essentially about exploring what electronically produced and processed imagery means for the use of force.

In this sense, the complexity of human-machine interaction, and the way human perception is shaped by and becomes dependent on limited electronic data output and interpretation, is not a recent phenomenon. Air defence systems, for instance, show how moving to a sensory dimension beyond the range of the human eye can have fatal consequences if the perception and interpretation of electronic data do not match the situation. Fratricides involving Patriot batteries and the downing of civilian airliners over Ukraine and Iran are tragic examples of this dynamic, as comprehensively argued by AutoNorms researchers in a recent report.

However, coupling computer vision with powerful machine learning algorithms removes human agency even further from the operational context. The political debate about lethal autonomous weapon systems (LAWS) at the UN Convention on Certain Conventional Weapons in Geneva is often concerned with the complete absence of human control. But automatic image recognition that contributes to the pre-selection and filtering of information can already make meaningful human control virtually impossible. One of the main arguments for the military necessity of systems that are increasingly involved in decision-making is the value the military places on speed and agility. Arguably, it is of huge tactical (and strategic) advantage if ever larger data volumes can be processed in ever less time, while responses become more immediate and accuracy is maintained.

The promises and imagination of the, predominantly US, tech industry further fuel the narrative about the unlimited opportunities of AI to offer a technological fix for the pertinent military-strategic and tactical problems of the present. Palmer Luckey, co-founder of Anduril Industries and one of the most vocal representatives of this group of actors, emphasised in a 2019 interview that he thinks “soldiers are going to be superheroes who have the power of perfect omniscience over their area of operations, where they know where every enemy is, every friend is, every asset is”.

While this narrative can serve political, military, and financial interests, it overlooks serious limitations of the associated technologies. In the remainder of this post, I briefly discuss some of the serious challenges to image recognition that undermine the imaginaries of technologically enabled omniscience on the battlefield.

Research on computer vision, image recognition, and machine learning focusses increasingly on so-called “adversarial examples” (AEs): artificial perturbations to images. In general, two types of AE can be distinguished. Digital AEs add “noise” to the pixels of an image; these alterations are imperceptible to the human eye but can provoke the misclassification or misdetection of the objects shown.
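The mechanics of a digital AE can be illustrated with a deliberately simple toy, not drawn from any of the research discussed here: a random linear “classifier” over 100 synthetic “pixel” values. For a linear model, the gradient of the score with respect to the input is just the weight vector, so a fast-gradient-sign-style perturbation amounts to shifting every pixel by a small constant against the sign of the weights. The names and numbers below are purely illustrative assumptions.

```python
import numpy as np

# Toy linear classifier: class 1 if w . x + b > 0.
# A digital adversarial example shifts every "pixel" by a small
# amount (eps) in the direction that lowers the class-1 score.
# For a linear model that direction is simply -sign(w).

rng = np.random.default_rng(0)
w = rng.normal(size=100)            # pretend these are learned weights
b = 0.0
x = w / np.linalg.norm(w) * 0.05    # an input with a modest class-1 margin

def predict(x):
    return int(w @ x + b > 0)

eps = 0.02                          # small, uniform per-pixel change
x_adv = x - eps * np.sign(w)        # the "imperceptible" perturbation

print(predict(x))      # 1: original input classified as class 1
print(predict(x_adv))  # 0: same image to a human, different class to the model
```

In a real attack the same idea is applied to a deep network via its input gradients; the point of the sketch is only that a perturbation far smaller than the signal can dominate the decision.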

Physical AEs change the physical environment within the field of vision of a computer vision system’s sensors. These perturbations are physically attached to objects as patches or similar structures. For example, in 2018, researchers generated a sticker that was added to the object to be classified, in this case a banana. The image below shows the patch: the classifier rated the unpatched image with very high confidence as a “banana”, while adding the patch caused it to rate the patched image with very high confidence as a “toaster”.
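The patch idea can also be sketched with the same kind of toy linear scorer, again as a purely illustrative assumption rather than the method used in the cited work: unlike the imperceptible digital perturbation, a patch is confined to a few “pixels” but may change them drastically, and it forces the target class regardless of the rest of the image.

```python
import numpy as np

# Toy sketch of a patch-style attack: overwrite only a few "pixels"
# with large values aligned with the target class ("toaster"), leaving
# the rest of the image untouched.

rng = np.random.default_rng(1)
w_target = rng.normal(size=100)    # score weights of the forced target class

def target_score(x):
    return w_target @ x

x = rng.normal(size=100) * 0.1     # an arbitrary benign input

# Place the "sticker" on the 5 pixels the target class is most
# sensitive to, saturated in the direction that maximises its score.
idx = np.argsort(-np.abs(w_target))[:5]
x_patched = x.copy()
x_patched[idx] = 3.0 * np.sign(w_target[idx])

print(target_score(x) < target_score(x_patched))  # True: the patch dominates
```

Real patch attacks optimise the sticker’s pixels over many images and viewing angles so that it works when physically printed, but the localised, high-magnitude character shown here is the same.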

Source: Brown et al. 2018, p. 2.

Other examples of physical attacks include patches added to road signs that cause autonomous driving applications to malfunction, or 3D-printed objects designed to provoke a robust misclassification by image recognition software. While malfunctions and fatal accidents involving autonomous driving systems have attracted public attention in recent years and called the reliability of autonomous features into question, deliberate attacks using adversarial examples, as outlined above, pose an additional, significant risk to public safety.

In the security domain, the US military has started to address the challenges posed by AEs. In 2019, the US DoD released a funding opportunity call for the creation of the “Guaranteeing AI Robustness against Deception” (GARD) programme, running for 48 months. GARD has three primary objectives: “1. Create a sound theoretical foundation for defensible AI. 2. Develop principled, general defense algorithms. 3. Produce and apply a scenario-based evaluation framework to characterize which defense is most effective in a particular situation, given available resources. GARD defenses will be evaluated using realistic scenarios and large datasets” (DARPA, 2019: 6).

Under the radar of debates about fully autonomous systems portrayed as “killer robots”, often linked to popular fiction imaginaries such as the “Terminator”, a new digital fog is descending that seems even denser and more incapacitating than the lack of situational awareness amid the gunpowder smoke and massed troop movements on the battlefields of the Napoleonic wars, which were Clausewitz’s formative experience. As Will Knight of MIT Technology Review argues, “[e]ven as people worry about intelligent killer robots, perhaps a bigger near-term risk is an algorithmic fog of war, one that even the smartest machines cannot peer through”.

Featured image credit: Max Gruber / Better Images of AI / Ceci n’est pas une banane / CC-BY 4.0
