Research Article

Topics: Technology, United States of America

Why Investigating Tech Startups in Algorithmic Warfare Matters

On 29 May 2025, the tech startup Anduril Industries and the Big Tech company Meta announced a privately funded collaboration to design and develop extended reality products integrating AI for the US military. This is just one recent example of private tech companies’ growing involvement in supplying the US government with AI-based products for defence and security. The number of Department of Defense (DoD) contracts awarded to tech firms is on the rise, and some company executives, such as Palantir’s Chief Technology Officer Shyam Sankar, have even been appointed as officers and senior advisors in the US Army. The influence of startups, especially those funded by venture capital (VC), has been particularly visible when it comes to developing and supplying technologies grouped under the umbrella term ‘AI’ for defence.

In an article published in Global Policy: Next Generation, I argue that this increased influence calls for further examination of startups’ practices in algorithmic warfare. These practices include the patterns by which startups such as Palantir, Anduril, Scale AI, and Shield AI develop and supply technologies to the US government and other actors.

To name a few examples, Palantir has for many years provided software platforms to the US and its allies for law enforcement, surveillance, intelligence, and defence. It has been supplying AI tools to the Ukrainian armed forces for battlefield targeting, collecting evidence of Russian war crimes, and clearing landmines. In 2024, the company won several contracts from the Pentagon, notably one to develop the Tactical Intelligence Targeting Access Node (TITAN), a targeting decision support system described as “the Army’s first AI-defined vehicle”, and another to work on the Maven Smart System.

Meanwhile, the startup Anduril provides its Lattice software to agencies including US Customs and Border Protection, the UK Home Office, and the US Special Operations Command. In April 2024, Anduril was selected, along with General Atomics, to build autonomous drone prototypes for the next phase of the US Air Force’s Collaborative Combat Aircraft initiative. It also supplies aerial drones to various states around the globe.

Another key startup is Scale AI, which has received a $14 billion investment from Meta. Former CEO Alexandr Wang claimed that the Scale AI-developed Donovan platform was used by “more than 80 government organizations”. Scale AI has won several US government contracts and is reportedly supplying the DoD Chief Digital and Artificial Intelligence Office with a framework for evaluating military applications of large language models.

Finally, in 2024, the US Coast Guard awarded the startup Shield AI a $198 million contract to deliver its V-BAT drone, which integrates the company’s Hivemind software, for intelligence, surveillance, and reconnaissance (ISR) as well as pilot and mission training. The company has also supplied its Nova 2 drone to the Israel Defense Forces.

With the exception of Palantir, the most prominent defence startups are funded by VC firms, which closely monitor the US government’s interest in defence AI products. As more contracts are awarded to VC-funded startups, further investment follows. In 2024, defence tech startups received approximately $3 billion in funding. While success rates vary, and supplying AI technologies to the DoD is not always straightforward given challenges such as the lack of procurement mechanisms for software, startups’ influence in the US defence AI space has visibly grown.

Startups’ development and supply practices contribute to what scholars such as Marijn Hoijtink and Anneroos Planqué-van Hardeveld identify as the ‘platformization’ of security and warfare: an increased reliance on software and hardware that are “in the hands of only a handful of defence tech companies”. These practices also follow the logic of VC funding, which favours short-term, high-risk investment according to the motto of ‘moving fast and breaking things’, or what Elke Schwarz describes as “blitzscaling” in warfare. These trends help normalize certain visions of algorithmic warfare, especially ones that prioritize the fast development of military AI over its regulation.

Startups contribute to this normalization not only via financial and political influence, but also by (re)producing certain discourses in the public space. Their representatives are increasingly present in the media and publish books. These discursive practices are crucial to investigate because the logic of VC funding gives startups an incentive to engage in hyped discourses that promote their products.

My article in Global Policy offers a preliminary analysis of key themes in these discourses. They include portraying AI as a technological fix to warfare, competing with and deterring rivals, and the need to reform the US defence acquisition system, all conveyed with an overall sense of urgency to develop AI for defence that overlooks the security, legal and ethical implications. As Ali Rıza Taşkale argues, such discursive practices boost “the consolidation of private power as national duty” because startups promote their own platforms and technologies under the guise of defending national security and US ‘values’.

These practices reinforce forms of innovation that prioritize development and experimentation over considering the risks associated with the ongoing militarization of AI. They thereby help render ‘appropriate’ a vision of algorithmic warfare in which innovation takes primacy over the characteristics that make warfare a deeply political, complex and messy phenomenon, one that sits uneasily with the logics of how AI technologies function.

As journalist Jonathan Guyer asks, “The Silicon Valley mindset has led to breakthroughs in apps and smartphones. But is the move-fast-and-break-things culture what we want shaping the future of war?” As part of this overarching question, scholarship on algorithmic warfare should pay more critical attention to startups’ practices in defence, both in the US context and beyond.

Featured image credit: Elise Racine & The Bigger Picture / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/
