Long before James Cameron’s Terminator and Stanley Kubrick’s HAL 9000, there was Prometheus. It was the titan Prometheus who held a soft spot in his heart for us mere mortals and stole fire from the gods for us and, with it, the technical skills to put it to use. That is the story the Greek poet Hesiod injected into Western culture some 2,600 years ago. But, Hesiod tells us, Prometheus’s plan to make humans more capable was not without consequences. As punishment for Prometheus’s tricks, Zeus dispatched Pandora, whose infamous jar of calamities would eventually unleash every plague and pestilence upon the world. The theme of playing God and reaping horrible unintended consequences runs through the Western canon, including the Golem of Jewish mythology, a clay creature brought to life by magic and prone to violent rampages, and Mary Shelley’s Frankenstein.

Suffice it to say, concerns that the powerful things we create will lead to our own destruction are as old as they are real. Today, amid broad concerns that artificial intelligence (AI) will exceed the control of its creators, there is a more specific concern about AI-enabled military systems. How can we be sure that AI-enabled autonomous weapons will not produce unintended calamities? These concerns are legitimate, and Defense Department officials should respond to them with care.

I have lost count of the number of panels I’ve participated in or conferences I’ve attended in which a DoD official, either in or out of uniform, has reassured audiences concerned about the prospect of AI-enabled systems making targeting decisions by offering some variation on: “Don’t worry, it’s DoD policy that we’ll always have a human in the loop.”

The problem, of course, is that this is not now, nor has it ever been, DoD policy. 

I can understand the appeal of the “human in-the-loop” framing. People are worried about AI-enabled weapons systems and particularly the prospect of autonomous weapons run amok. But requiring a “human in-the-loop” is not the right constraint against such a future. 

When technical professionals, both inside and outside of the Pentagon, talk about artificial intelligence, they are generally not talking about some yet-to-be-achieved human-like cognition. They do not have in mind artificial general intelligence. Instead, they are referring to a family of computer science techniques developed over the last few decades. In the 2020s, professionals most often use “AI” to refer to the deep learning statistical prediction techniques that have exploded since 2012, an explosion made possible by the combination of parallel processing on graphics processing units originally developed for the gaming industry and the ubiquitous “big data” available through the internet.

Given this technological background, when Pentagon officials refer to AI-enabled lethal autonomous weapons systems, they’re usually talking about employing statistical prediction techniques like deep learning to enable combat effects.
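
For readers who want to see what “statistical prediction” means in practice, the following is a minimal, purely illustrative sketch in Python. Every name and number in it is invented; the random, untrained weights stand in for no real or fielded system. The point is only that the output of such a model is a probability distribution over classes, not understanding, intent, or judgment.

```python
# Minimal, purely illustrative sketch of "AI as statistical prediction":
# a tiny, untrained feed-forward network mapping a vector of made-up
# "features" to class probabilities. No real or fielded system is depicted.
import numpy as np

rng = np.random.default_rng(0)

features = rng.normal(size=4)                   # stand-in for preprocessed sensor data
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)   # hidden-layer weights (random, untrained)
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)   # output-layer weights (random, untrained)

hidden = np.maximum(0.0, W1 @ features + b1)    # ReLU activation
logits = W2 @ hidden + b2

# Softmax: the model's "answer" is a probability distribution over classes,
# i.e., a statistical prediction, not reasoning or judgment.
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs)                                    # three probabilities summing to 1.0
```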

Given the fears about Terminators and our creations turning on us, the “human in-the-loop” framing is understandable, but this language muddles more than it clarifies. The “loop” language is, at best, ambiguous and, at worst, misleading. 

The “in the loop” language is ambiguous in that many who use it fail to define precisely which loop is under consideration. Think of what we now consider “legacy” weapons technology: the Advanced Medium-Range Air-to-Air Missile (AMRAAM), a weapon that entered service in 1991. When a US Air Force fighter pilot employs this missile, the pilot first uses the aircraft’s radar to identify an airborne “track” and determines through various means that the track is “hostile.” The pilot releases the missile, and then the missile takes over. It flies to a predetermined volume of space and emits energy with its own onboard radar. It picks up the track and then calculates the best intercept geometry to target and destroy it.

Is the AMRAAM a lethal autonomous weapons system? Is there a human in the loop? Well, it depends entirely upon where we draw the loop. If we are referring to the AMRAAM’s ability to choose the best course to intercept the track, then it is a human out-of-the-loop system. But if we draw the loop so that it includes the human pilot’s decision to release the weapon (and to target the track), then it is a human in-the-loop system. Whether the “human in the loop” language can serve as a frame for thinking about the ethics of autonomous weapons systems depends entirely upon which loop is at stake. And on this question, many who use the “in the loop” language are silent. 
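
To make the two candidate loops concrete, here is a notional sketch in Python. Every class, function, and value in it is invented for illustration, and it contains no real sensor, flight, or weapons logic; it simply marks where the human decision sits (the outer engagement loop) and where the machine operates without further human intervention (the inner terminal-guidance loop).

```python
# Notional illustration of the two "loops" discussed above. All names and
# values are invented; this is not a model of any real system.
from dataclasses import dataclass
from typing import Tuple


@dataclass
class Track:
    position: Tuple[float, float, float]  # notional location of the airborne track
    hostile: bool                         # determination made by the human crew


def pilot_engagement_loop(track: Track) -> bool:
    """Outer loop: the human identifies the track, judges it hostile, and
    decides whether to release the weapon. The human is 'in' this loop."""
    if not track.hostile:
        return False
    human_consents_to_release = True      # stand-in for the pilot's decision
    return human_consents_to_release


def terminal_guidance_loop(track: Track) -> Tuple[float, float, float]:
    """Inner loop: after release, the seeker reacquires the track and computes
    intercept geometry without further human intervention. The human is 'out
    of' this loop."""
    intercept_point = track.position      # stand-in for intercept-geometry math
    return intercept_point


track = Track(position=(10.0, 5.0, 2.0), hostile=True)
if pilot_engagement_loop(track):          # human in the loop here
    print(terminal_guidance_loop(track))  # human out of the loop here
```

Drawn around the inner function alone, this sketch looks like a human out-of-the-loop system; drawn around the whole sequence, it looks like a human in-the-loop system. That is precisely the ambiguity at issue.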

In general, unless we are careful to define which loop we have in mind, saying that some system is “human in-the-loop” or “on-the-loop” or “out-of-the-loop” is deeply ambiguous. 

But there is another, more fundamental, reason to reject the “in-the-loop” language. Linguistically, defining the human’s role in relation to the machine’s control loop inherently prioritizes the machine’s role over the human’s. It suggests that there is some preexisting task the machine will perform, and that the human’s role can then be defined relative to that preexisting machine role. But this is the wrong prioritization.

If those who paraphrase Clausewitz are right that war is a human endeavor, we would do well first to define the task of the human and then to ask where, in relation to that task, the machine is best positioned to help. (Several co-authors and I have made this argument at greater length elsewhere.)

Given these arguments for avoiding the “in-the-loop” language, it makes some sense that the DoD has never committed itself to always having a human in the loop for autonomous weapons systems.

There is one exception. In all of DoD strategy and policy, there is only one mention of a human in the loop. The 2022 Nuclear Posture Review states: “In all cases, the United States will maintain a human ‘in the loop’ for all actions critical to informing and executing decisions by the President to initiate and terminate nuclear weapon employment.” We will always have a human in the loop when it comes to nuclear weapons.

If DoD policy doesn’t commit us to having a human in the loop for non-nuclear weapons, then what does it commit us to? 

DoD Directive 3000.09, “Autonomy in Weapon Systems,” defines an autonomous weapons system as “a weapon system that, once activated, can select and engage targets without further intervention by an operator.” It requires that any autonomous weapon developed within the DoD “will be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”

Though what, exactly, counts as “appropriate levels of human judgment” is not specifically defined, the tri-chair panel of senior leaders who must make that determination is defined. DoDD 3000.09 requires that autonomous weapons undergo a review twice in the system lifecycle, once before formal development and again before fielding. The senior leader review panel consists of the Under Secretary of Defense for Policy (USD(P)), the Under Secretary of Defense for Research and Engineering (USD(R&E)), and the Vice Chairman of the Joint Chiefs of Staff (VCJCS).

This review process is not the only backstop against the irresponsible development or employment of autonomous systems; it is simply the policy specific to autonomous systems. As DoDD 3000.09 makes clear, those who authorize or employ autonomous weapons systems are still responsible for ensuring compliance with the “law of war, applicable treaties, weapon system safety rules, and applicable rules of engagement.”

Lethal autonomous weapons systems admittedly impose novel risks. But all new weapons do. The question for those working in military AI and military ethics to answer is this: how do we develop novel weapons systems that enable commanders and operators to submit to the laws of war? And this is a point DoDD 3000.09 makes explicitly.