In the opening sequence of Top Gun: Maverick, a disgruntled, if visionary, superior officer upbraids Captain Pete Mitchell (yet again!) for insubordination. With the rebuke, Admiral Cain also offers a warning: “The future is coming, Maverick. And you’re not in it.” He has in mind the ongoing rise of the so-called drone, or RPA—Remotely Piloted Aircraft. Drones, the admiral gushes, don’t “sleep, eat, take a piss, or disobey orders.” Dubbed the “Drone Ranger” by a detractor, Cain is convinced remote and automated technology will obviate the need for human pilots—avoiding both the risk of harm to military personnel and the cocksure, sometimes inconvenient freethinking of personnel like Maverick.

Despite that opening, contrasting visions of the human future of naval aviation don’t really remain a focus of Maverick, but the humanity of naval aviators very much does. Maverick reminds us, if only indirectly, that war remains a deeply human experience, whatever the future may hold. To my mind this is a salutary observation, for war ought to remain a human experience. This is because it must remain a moral one.

Of course, Cain is right that RPAs mitigate certain kinds of risks to human pilots. The mission around which Maverick is centered involves a desperate and critical aerial assault to destroy a hardened weapons facility before it becomes “fully operational.” It carries a high likelihood of casualties and a low one of success. For tactical detail, you can just imagine the Trench Run in Star Wars. Super Hornets—replacing the X- and Y-Wing fighters—are required to navigate a dangerously narrow canyon lined with a gauntlet of surface-to-air defenses, all the while eluding both radar detection and enemy aircraft. If they survive the canyon run, they must then conjure up a second miracle and put a missile into the weapons installation’s tiny exhaust port, through which they hope to penetrate to the facility below and destroy it. Unfortunately, success will of course cause a bit of commotion, alerting the enemy to their presence. The dogfight expected to ensue will pit the American aviators against enemy aircraft superior in technology, capability, and number. It is, Maverick believes, a mission from which not everyone will return.

It’s also precisely the kind of mission for which the Drone Ranger believes human beings ought to be replaced. Presumably Cain’s concerns include, appropriately enough, the welfare of human pilots. But his motives in this regard aren’t clear. The reasons he gives for preferring RPAs to human pilots—they don’t need lunch or comfort breaks and they do what they’re told—boil down to a concern for accomplishing the mission, not safety. Mission effectiveness and force protection, of course, are neither mutually exclusive nor binary choices. A military commander is rightly responsible for both, even if sometimes compelled to tip the scales in favor of one or the other.

Maverick similarly appears to apportion different levels of commitment to these dual responsibilities. In training a team of fighter pilots to take on the mission, he puts a heavy emphasis on bringing all of his team home—so much so that his supervisor finally intervenes, believing that Maverick is unduly compromising mission effectiveness. Maverick’s preference is understandable, as well as morally defensible. His concern for the welfare of those in his command is deeply, perhaps uniquely, human. At the same time, his own commander’s concern for the mission is also both moral and humane. The just war tradition is grounded on the assumption that there are times when human flourishing is so threatened that a fight must be fought. In those times, those fights must also be won. A commander who does not fight a just war—or launch a just mission—with an intention toward victory is, in most instances, in dereliction of not just military duty but moral duty as well. An observer, therefore, weighing the mission characteristics in Maverick—and in many a real-world mission—might reasonably conclude that the possibility of deploying drones instead of human beings is a gift from heaven, allowing a commander to meet the needs of both the mission and the man who flies it.

That might be, but it’s rarely ever as simple as that. In a critical moment during the climactic attack, a laser guidance system fails and the human pilot has to sight-check the target. Experience, instinct, and skill succeed where tech falters. The point here is that technology is great until it isn’t. Just as a good driver on a road trip had better know how to read a map in case GPS breaks down, so too will we continue to need human beings—even if held mostly in reserve—ready to scramble if the machines can’t get the job done. There will always be a brave American willing to take the fight to the enemy. If the lights go out, we’ll have to be ready to fight with sticks and stones.

Maverick takes this a step further. It is not true, the film suggests, that when it comes down to one-against-one or force-against-force, better technology always wins. One of Maverick’s continued refrains is that it’s not the plane but the pilot. The person in the box will always matter.

Importantly, it’s also true that combat missions include a third moral consideration in addition to force protection and mission effectiveness. Noncombatant immunity is the requirement that commanders intentionally target only those who pose harm, or the threat of harm, to their own forces or the completion of the mission. Making these distinctions—and making exceptions in targeting despite these distinctions—is, at least at present, a distinctly human task.

While this concern is not explicit in Maverick, we can see an argument in this regard in Spectre, the fourth installment in Daniel Craig’s tenure as James Bond. M—the head of MI6—has a heated confrontation with his new supervisor over the efficacy of depending heavily on technology—rather than human agents—in intelligence gathering and target selection.

“Have you ever had to kill a man?” M asks his inexperienced adversary. Getting a negative response, he continues:

To pull that trigger you have to be sure. Yes, you investigate, analyze, assess, [and] target. And then you have to look him in the eye. And you make the call. And all the drones, bugs, cameras, transcripts—all the surveillance in the world can’t tell you what to do next. A license to kill is also a license not to kill.

An implication here is that the cold, mechanical logic of a machine cannot adequately replace the conscience, wisdom, and moral instinct of a human being. Something like this plays out in the technically faulty but philosophically helpful film Eye in the Sky. A British RPA is surveilling a set of high-value targets in a building in Kenya. The targets are preparing to carry out a suicide attack and, if allowed to leave the building, the crew might lose the opportunity to kill them. The problem is that a little girl is in the target vicinity and will likely be killed in the attack. A human drama plays out as the flight crew tries to sort through the morality of the attack.

This reminds us that remotely piloted aircraft are precisely that—remotely piloted. They are not unpiloted—or automated—aircraft. Killer robots might be on the horizon, but they have not arrived in any real sense. So human beings remain in charge. In the American case, an RPA pilot—nowadays flying the Reaper platform—might be 7,000 miles away from their target, but they are also, paradoxically, extremely close psychologically. The optical sensors on Reapers allow an extraordinary degree of detail—typically giving the Reaper pilot a greater degree of situational awareness than those on the ground.

What this means is that while an RPA pilot is physically safe from battlefield harms, they are very much exposed to moral harms. In his forthcoming Is Remote Warfare Moral?, Air Force Lt. Col. Joe Chapa describes a Reaper strike made in early 2015. A high-value Al Qaeda commander was being tracked while walking with a young boy, presumably his son. The Reaper crew, benefiting from the aircraft’s significant loiter time, waited for an opportunity to make the strike without harming the child. Eventually the two parted ways. The crew took the shot and killed the enemy combatant. Following the strike, the crew stayed on the scene to conduct battle damage assessment. They watched as the boy returned to his father’s body. The Hellfire missile had not only ended the man’s life but had blown him into pieces. As Chapa describes it:

Slowly and methodically, he began to pick up the pieces and put them back together again in the shape of his father. I cannot imagine how painful that experience must have been for the child. The Reaper pilot who had taken the shot and who stayed for the battle damage assessment was forced to imagine it. He was a father with a son about the same age as the boy on the screen. He said, “I can’t watch this,” asked another pilot to take the controls, and left the cockpit.

Moral injury, many readers will already know, is a psychic wound that can occur when one does, or allows to be done, something that goes against a deeply held moral norm. The shot the Reaper crew took resulted in a “good kill” that met both legal demands and the requirements of just war’s jus in bello conditions—it was necessary, proportionate, and discriminate. It ought not to be morally injurious, because the killing in question was morally permissible. It ought not to violate any deeply held moral norm, because any belief that such a killing is morally wrong ought, simply, not to be held. That said, a distinction is worth making: just because something is not morally injurious does not mean it cannot be morally bruising. Watching a child reassemble his father like Humpty Dumpty—a father that you killed—ought to leave some kind of impact trauma. It should bruise you even if it does not—and ought not—cripple you.

Of course, there are times when Reaper crews get it wrong. Sometimes, for example, they are responsible for accidentally killing civilians. When this happens, the visual detail offered by the technology, which often makes so much of their job easier, can make the horror of what they’ve accidentally done all the more clear.

Admiral Cain is wrong if he believes that his beloved “drones” will utterly eliminate harm to warfighters.

It’s worthwhile here to note that RPAs have highlighted the need for a particular kind of courage among their aircrews. Again, I don’t mean the kind of physical courage demanded of those on the battlefield. It’s true that RPA crews working 7,000 miles away from the warzone are under no direct threat from the enemy. But, as with the difference between physical and moral harm, there is a difference between physical and moral courage.

Moral courage can be seen in another anecdote shared by Chapa. He relates a scenario in which a Reaper crew was supporting a special operations team on the ground. The team was tracking a high-value target and had located him in a particular area. The operators called for the Reaper crew to strike the target and kill the enemy fighter. The aircrew, however, could clearly see on their video screen that children were playing close enough to the target that they would be caught within the blast radius. They confirmed the presence of the children with the operators on the ground. Nevertheless, everybody knew that if they gave up the opportunity to strike the target now, they might never get another chance. The operators on the ground—who had much more skin in the game—demanded that the shot be taken. The pilot, cognizant that not only he but also his aircrew would have to live with the consequences of what they did, asked his team whether they were comfortable with the shot. They were. Even so, the pilot decided that he would wait five minutes. If the children had not moved out of range by then, he would take the shot anyway. He relayed this to the team on the ground.

As it happened, within the allotted time the children did, indeed, move out of range. The aircrew took the shot, killing the target. It was a good outcome. Nevertheless, one can appreciate the courage it took for that pilot to delay the shot. He was not in any physical danger. The team on the ground was. You can tweak the scenario in any of a dozen ways and recognize that it might take enormous moral courage to delay—or refuse to take—a requested shot when doing so might endanger fellow military personnel who are already in harm’s way. It remains unclear to what degree killer robots would be able to delay carrying out such orders. Again, the man in the box is essential, this time morally so.

At the end of Maverick’s opening confrontation between Pete Mitchell and his drone-bent superior, Maverick is given a grim promise. “The end is inevitable,” the admiral tells him. “Your kind is headed for extinction.” Maverick considers the notion for a moment, then replies: “Maybe so, sir. But not today.”

Back in 1986, many a thirteen-year-old watched the first Top Gun and began to dream of flying planes off ships. Some of them have since done so, emulating their onscreen heroes. To be sure, despite the Admiral Cains of the world, a young boy—or girl—watching Top Gun: Maverick can have every confidence, should they so dream, that they too can be in a manned fighter plane someday. For how many generations this will remain true is not knowable. But it is true for now. For now, and into the foreseeable future, war remains a human experience.

And therefore it remains a moral one.