Mark Tooley: We’re grateful that Joe Chapa is going to speak to us, a little more briefly than some of our other speakers and without any preparation. However, as a U.S. Air Force officer, he is always prepared to speak and to serve, and he is a longtime friend of Providence. He teaches at a nearby Marine Corps school as a U.S. Air Force officer, as I said, and wrote an important book on the morality of remote warfare, a very timely topic. Joe, are you in the room? There you are. So, Joe, thank you for standing by and being available for us.

Joe Chapa: My pleasure. Thank you for the opportunity, impromptu though it may be.  

As you heard, I’m a U.S. Air Force officer. It’s important for me to let you know that the views I express here are my own and don’t necessarily reflect the Department of Defense or the Department of the Air Force. Though I hasten to add, there’s something pretty special about the United States military, in that it empowers people like me to come out and share my views with you—and whoever else will listen—without any constraints. Not every country in the world treats its military officers that way.  

So, we don’t have that much time, but I do want to try to sort of tie a bow around some of the themes that we’ve heard about today. Just by way of background, I flew airplanes a long time ago. I went to grad school a couple of times for philosophy, where I focused on just war theory. Most recently, I served as the Department of the Air Force’s Chief Responsible AI Ethics Officer, so I’ve thought a lot about the relationship between just war theory and this sort of new age of AI and autonomy that we seem to be entering.  

Over the last couple of days, you’ve heard a lot about just war principles, a fair amount about technology and technology development, and a lot about great power competition between the United States and China. I’m going to try to tie those three things together, if I can.  

But before I do that—did you hear about the large language model that was trained exclusively on Air Force pilot communications? Yeah, it seemed like a good idea at the time, but then, when they rolled it out, it wouldn’t stop talking about itself.  

Okay, so this—we have reached the audience participation part of the day. This is the quiz. I don’t know if you knew a quiz was coming, but it’s the kind of quiz that you can collaborate on with your neighbors. So just shout it out: When we talk about the just war tradition, there are either two or three categories of principles that we talk about: one governing the decision to go to war, one governing the conduct within war, and then, some people think, a third category for the rules after war. What are those three categories called? Somebody shout it out.

It’s in Latin.  

Audience Member: The jab and the jib.

Chapa: Jus ad bellum, the jab; jus in bello, the jib; and then some people say jus post bellum for after war. Someday, Marc LiVecche and Paul Miller will have a cage match, and they’ll decide once and for all whether there’s a third category or whether jus post bellum gets rolled up into jus ad bellum.

Okay, in that middle category—the jib, the jus in bello—there are three—Marc LiVecche says three—principles that fall under that category. Somebody shout it out.

In the conduct of war, military members are required to do three things under the jus in bello. One has to do with non-combatants.  

In the back.  

Audience Member: Necessity, proportionality, discrimination.  

Chapa: Very nice. I’m just going to swap the order because this is how I think about it: discrimination, proportionality, and necessity. Dr. LiVecche talked about adding necessity back in. That’s important to me also, so I’m going to talk about all three.

So, discrimination requires that I distinguish between combatants and non-combatants, and I target only the former and never the latter. Proportionality says that the good to be achieved by the considered military option has to outweigh the bad that it will cause. There are different ways people sort of understand the good and bad there, but realize that’s a balancing test between the good I’m going to achieve and the harm I’m going to cause.  

And then necessity—again, there’s some disagreement here—but necessity is about the minimal harm. Am I choosing the option that will cause the minimal harm? Is it necessary for something? Do I have to do it this way, or is there some better way I could achieve the objective?

So now, in light of those really longstanding principles that we can trace all the way back to Aquinas—and then, some people say, even earlier than that to the trifecta of Aristotle, Cicero, and Augustine—given those principles, I’m sure you’ve heard there’s a lot going on in the development of artificial intelligence and autonomy.

If you’re reading any headlines about things the Department of Defense cares about, or the British Ministry of Defence cares about, or the People’s Republic of China cares about, all of these military organizations are trying to figure out: how do I leverage artificial intelligence in order to help me achieve military objectives?

I hope one of the things you’re wondering is, “Wait a second, if I’m going to employ AI-enabled autonomy, how could I possibly meet these demands of discrimination, proportionality, and necessity?”  

So, what I want to tee up for you is a couple of ways to think about how we might be able to meet those just war principles under the jus in bello, even though we might employ AI-enabled autonomous weapon systems.  

As you might suspect, there’s a spectrum—a range of views here. Right on one extreme end, some people say, “Hey, we’re in great power competition mode, and we’ve just got to throw out the rulebook. Those principles just don’t apply anymore because, by golly, we need the AI.”

I hope you will not be surprised to know that that’s not my view. On the other side, though, there are people who say, “Well, we just can’t pursue AI because it would be in violation of those principles, so we’re just going to have to cede that ground to our strategic competitors.”

I hope you will also not be surprised to hear that that’s not my view. So, what is the third way? What’s the middle way? That’s what I want to propose.

Okay, so I’m going to give you three sort of operational scenarios, and again, audience participation—I want you to tell me if, in that scenario, you think we have a discrimination problem, or you think we have a proportionality problem. Then I’ll save necessity and talk about that at the end.

When I say we have a discrimination problem, what I’m really asking is: do we think that technology can solve the discrimination problem in this operational scenario, or not? Do we have to make sure that we put humans there instead of machines?

So, here’s scenario number one: I’m at war with a near-peer competitor. They have a capable military. I have a capable military. I have airplanes on one side and airplanes on the other side, and they would like to shoot each other down. If I put an autonomous airplane out there that needs to be able to distinguish between combatants and non-combatants, and then target only the former and not the latter, do you think technology can solve that problem now or in the near future?

I’m seeing a lot of up-and-down head nods. I agree with you. In fact, we’ve been able to do that for a long time. For decades, U.S. aircrews have been able to identify adversary aircraft beyond visual range—meaning not with the human eyeball, but with some beeps and squeaks and some other technological means.  

Okay, do I have a proportionality problem? Remember, proportionality is about weighing the good I’m going to achieve—which is shooting down an airplane that I have a justification for shooting down; that’s the good thing. I might inadvertently cause some bad thing; that would be the other side of that balancing test for proportionality.

Do you think in this air-to-air environment I have a proportionality problem?  

I’m seeing maybe a quarter of the people shaking their heads left-right, and no up-and-down nods. Again, I’m inclined to agree with you, because think about the way that proportionality problem would have to arise.

We already said I can solve the discrimination problem, so I’ve successfully identified that enemy aircraft. I’m now going to shoot some munition at it—a missile, or maybe it’s a ray gun, or a laser, or something in the future, who knows, right? And I’m going to actually hit the target, but then somehow inadvertently cause harm to civilians, either loss of civilian life or damage to civilian property.

It’s possible, but it seems very unlikely, right? Because the odds that some civilian entity is in close proximity to that enemy aircraft during an air war are very low. Are you with me?

So, right now, in that first scenario, I think we’ve solved discrimination, and I think proportionality is going to take care of itself. That’s not a very high-consequence situation from a proportionality calculus perspective.  

All right, here’s scenario number two: boats. I work with the Marines—I’m supposed to call them ships now, I’m told—ships. I’ve got ships on the surface of the ocean, and they want to shoot; they want to sink each other, right? Friendly ships, enemy ships.

Same question: do you think I could use technological means to solve the discrimination problem?

Probably for the same reason, right? It’s a big piece of metal, and I can use technology to figure out which big pieces of metal are military pieces of metal and which pieces of metal are civilian. So, I can probably distinguish between the aircraft carrier and the cruise ship using technological means.

Do I have a proportionality problem if I want to sink ships? Again, it seems very unlikely to me, right? It seems like the only way that would happen is if the aircraft carrier that’s involved in an active, hot shooting war is parked next to the Disney cruise ship.

It’s possible—you can imagine scenarios, right? Like, we’re striking ships in the harbor or something, and there are civilian ships in the harbor. That’s possible, but it seems unlikely.

Now, I’m going to problematize this a little bit further. Suppose we had an adversary that wanted to use a fleet of, let’s say, civilian fishing vessels for a military purpose. Could you imagine an adversary of the United States doing that? I certainly could.

Now I have a harder problem from a technology perspective. Suppose I see—I don’t see, but my machines are able to perceive—ships on the surface of the ocean. Some of them look very clearly like military targets, like an aircraft carrier or a destroyer. Others look like fishing vessels. And I, the human in charge of this whole operation, know that this adversary sometimes uses civilian fishing vessels for military purposes.

Now, do I have a discrimination problem?

It seems like I do, right? So, I’ll come back to what that might mean. But what I’m trying to do here is show you that whether or not I have a discrimination problem depends a little bit on the adversary and the choices that he or she makes.

Okay, last one. Let’s say I want to engage in a land war. So, I’ll use an airplane—because I’m an air power guy. I’ve got an airplane, and I’ve got a ground target in enemy territory that I need to destroy. It’s a tank or whatever.

Do I have a discrimination problem?

People seem less enthusiastic. I saw some headshakes—no—but people seem less enthusiastic about that, and I hear you. It’s possible, I think, that we could solve that discrimination problem, but realize it’s going to be harder.

In the first case I gave you, we’re using radar or some other sensor in a big blue sky. The things that are present in that picture are just a small set of things; there aren’t many things that are going to be there. There are going to be airplanes, and that’s kind of it. Now, I need to distinguish between combat airplanes, civilian airplanes, and friendly airplanes, right? But the set of things in that field of view is just not that many.

When I start looking at the ground, the set of things that are going to pop up on my radar goes way up. I’m going to see civilian structures. I’m going to see railroads. I’m going to see houses. I’m going to see returns just from the terrain itself. So, starting to filter through all of that gets harder. It’s not impossible, but it gets harder for the technology.

Do I have a proportionality problem?

The lack of nods or shakes either way is a good sign because it’s entirely context-dependent, right? Whether I have a proportionality problem depends on what’s happening on the ground at that moment.

If, like some of our adversaries we’ve fought in the past, they want to park their military assets in schoolyards to create a proportionality problem for me, then you better believe I’m going to have a proportionality problem. Striking that tank might mean inadvertently harming the kids in the school, right?  

If I’m looking at fielded forces that are dozens or hundreds of miles away from the nearest civilian, my proportionality problem gets a lot easier, right? So, that’s really going to be context-dependent.  

Okay, so those are the three scenarios. I hope what you’re getting from this is that it’s at least plausible that, in some contexts, we could use technology to solve the discrimination problem, and we can choose ways, methods, or places of employment that make the proportionality problem a lot easier.  

So, there are a couple of takeaways that we can mention, but first, let me say something about necessity. In my interpretation—and people disagree about this—necessity is sometimes called the minimal harm requirement, right?

Remember, proportionality is about balancing. Suppose I’ve got two courses of action as a military commander. One is going to achieve my objective—a high-value target, very, very important—and it’s going to kill one civilian. That probably meets the proportionality test. My other course of action is going to kill the same high-value target—very, very important—but it’s going to cause two civilian deaths.

Proportionality actually can’t tell me which option to choose because they both meet the proportionality calculus. Proportionality is just a weighing that says, “Yes, the good to be achieved is greater than the harm.” So, I need another principle: the necessity principle, which says, “Hey, cause the least amount of harm possible.”

Okay, that’s the principle that says I need to choose that first course of action that’s only going to harm one civilian.
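To make that decision rule concrete, here is a minimal sketch, with made-up course-of-action names and purely illustrative numbers that treat "good" and "harm" as comparable quantities only for the sake of the example. It shows proportionality acting as a filter (expected good must outweigh expected harm) and necessity acting as a selector (choose the least harmful option that survives the filter):

```python
# Hypothetical sketch of proportionality and necessity as decision rules.
# Course-of-action names and values are illustrative only, not any real system.

courses_of_action = [
    {"name": "COA-1", "military_value": 10, "expected_civilian_harm": 1},
    {"name": "COA-2", "military_value": 10, "expected_civilian_harm": 2},
]

# Proportionality: keep only options whose expected good outweighs the expected harm.
proportionate = [
    coa for coa in courses_of_action
    if coa["military_value"] > coa["expected_civilian_harm"]
]

# Necessity (the minimal-harm reading): among the proportionate options,
# choose the one that causes the least harm. Proportionality alone cannot break this tie.
chosen = min(proportionate, key=lambda coa: coa["expected_civilian_harm"])

print(chosen["name"])  # COA-1: same objective achieved, less harm caused
```

Both options pass the proportionality test in this sketch; only the necessity step forces the choice of the one-civilian course of action.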

Here’s where things get sticky for AI and autonomy. There are people publishing all the time—every day—saying, “We need autonomy. We need AI. We need autonomy because we have to be able to deter China and potentially defeat China.”

I understand that argument, right? But if I go procure those systems on the grounds that I need them to deter China, what’s to prevent me from using them in low-intensity conflict in the U.S. Central Command area of operations or the U.S. AFRICOM area of operations?

Do you see what I’m saying? The question we have to ask ourselves is: necessary for what? And then, in my view—just my opinion—we, as a national security apparatus, have to have the self-discipline to say, “We are only going to employ those systems in the operational environment that met the necessity condition in the first place.”

That’s not easy. And so, there is some work that we have to do in the national security apparatus to ensure that whatever we said the necessity argument was, that’s the place we limit these systems to.

Okay, so in conclusion, I think we can just highlight a couple of things.  

One, it’s at least possible to meet the just war requirements under jus in bello using these kinds of tools. It’s at least possible in some contexts. That implies to me that Western states, such as the United States, should perhaps focus their efforts on testing, experimentation, and development of this technology in those contexts where we know proportionality is easier and, potentially, where we know discrimination is easier. That’s a good place to start to learn lessons on how to employ these systems.

Secondly, there are going to be hard cases, like the fishing boats, right? No one is suggesting—I have never read a credible author who says—that when we’re talking about autonomy and AI, we’re talking about completely removing all of the humans and using only machines.

The question within the military environment is: how do I get the best performance out of this combination of machines and humans?

If I know going in that the adversary is going to use civilian infrastructure or civilian vehicles or whatever for a military purpose, I need to kind of put that problem on the shelf and say, “I probably need humans to solve that problem.”

That doesn’t mean there aren’t any problems the machines can solve, right? The machines can still distinguish between an aircraft carrier and a Disney cruise ship, right? So, as military planners, I think part of the duty is going to be to say, “Where do I put my autonomy, and where do I put my human judgment, in order to maximize the performance of that team?”
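One way to picture that planning duty is as a simple routing rule: let the machine act only on contacts it classifies with high confidence and that fall outside the classes the adversary is known to misuse, and send everything else to a human. Here is a minimal sketch under those assumptions; the class names, confidence values, and threshold are hypothetical, not drawn from any fielded system:

```python
# Hypothetical human-machine teaming rule for reviewing sensor contacts.
# Class names, confidence values, and the threshold are illustrative only.

AMBIGUOUS_CLASSES = {"fishing_vessel"}  # classes the adversary is known to misuse
CONFIDENCE_THRESHOLD = 0.95

def route_contact(predicted_class: str, confidence: float) -> str:
    """Decide whether a sensor contact can be handled autonomously
    or must be deferred to human judgment."""
    if predicted_class in AMBIGUOUS_CLASSES or confidence < CONFIDENCE_THRESHOLD:
        return "defer_to_human"
    return "autonomous_engagement_eligible"

# A clear warship classification stays with the machine;
# a possible fishing vessel goes to a human reviewer, however confident the model is.
print(route_contact("aircraft_carrier", 0.99))  # autonomous_engagement_eligible
print(route_contact("fishing_vessel", 0.99))    # defer_to_human
```

The design point is the one from the fishing-boat scenario: the boundary between machine and human tasks is set in advance by the planner, based on what the adversary is known to do, not left to the model at the moment of engagement.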

And then, finally, as I mentioned, necessity. We have to ask: necessary for what? And then, we have to have the self-discipline to employ those systems in the thing that we said they were necessary for.

I’m sure that I’m over time. I’ll leave it up to Mark Tooley as to whether we have time for questions.

Q&A

Mark Tooley: Yes, any questions?  

Question: McGregor Langston, Regent University. I just have a question about the future of warfare. People—or at least scholars—talk about the nature of warfare versus the character of warfare. The nature of warfare doesn’t really change much over thousands of years. It’s kind of always the same regardless. The character of war is always in flux; it’s always changing, and it’s based on technology, strategy, etc.

Some scholars are now arguing that AI is changing the very nature of war. So, do you think that AI could be a nature-of-war-changing thing, or do you think it’s just another change in the character of war?  

Answer: Yeah, great question. So, the short answer is: I do not think that AI represents a change in the nature of war.  

Your distinction between nature and character is exactly the way people talk about it. People cite Clausewitz—that war is a human endeavor. I think all of that’s true. I think the only way you could get close to saying that AI threatens to change the nature of war is by taking the humans out of it entirely, and I just can’t—I literally cannot—imagine that happening.

As a sort of proximate example, if you look at the technology development that’s happening in Ukraine and in Russia, they’re experiencing at a rapid pace the same two-step dance that we always experience in conflict: someone comes up with a technological solution to an operational problem, and they win the day—but just the day—because then the adversary comes up with a response to that, and then team one has to come up with a response to that.  

That back-and-forth is going to shape the future of technology development in war. It’s not going to be a situation in which whoever gets to AI first—whatever that might mean—then has a panacea solution for warfare, right?

At the end of the day, those technological advances will be mitigated by technological advances on the other side. Ultimately, human decision-makers are going to have to make tough decisions in the fog and friction of war, and that’s been the nature of war since forever. Good question.  

Question: JJ from Taylor University. I have a question—something that I know I and other people have kind of worked through—what are the effects on the individual human beings who are using these AI systems to take human life?

If you really can sink an aircraft carrier without having to be out in the field and get your hands dirty, is that just making you apathetic to humanity? Does that take away your compassion? And as a Christian, what do you think of those sorts of movements toward kind of autonomous warfare?  

Answer: Yeah, so the sort of practical question as to what effect AI or autonomous systems will have on the human operator—I don’t think anybody knows.

The reason I’m pretty confident that nobody knows is because I’ve written a lot on the use of remotely piloted aircraft. In that business, early on, outsiders had a very strong sense that because those people—my friends and I—were 7,000 miles distant from the weapons effects, they would necessarily feel psychologically disconnected from the work that they’re doing. That would drive apathy, dehumanization, all of those things, right?

This was famously put into a paper by Philip Alston, who was the Special Rapporteur on Extrajudicial Killings at the UN. He coined this term “PlayStation mentality” in about 2010. Shortly thereafter, the Air Force funded a bunch of empirical psychological studies of those crews and found that actually, that’s not true at all.  

The rates of, for instance, high risk for post-traumatic stress are exactly the same in that warfighting community as they are in traditionally piloted aircraft—fighters, bombers, and things like that.

So, our assumptions about what it would be like psychologically to do that work remotely were just—they were just wrong. It took time and empirical study to figure out what the right answer was.

So, I’m with you in the sense that I have an intuition that if we put the human further up the causal chain—not physically distant, like we did with remote warfare, but further back in the causal chain—that person will not be as intimately connected with the harm that they’re causing. But I don’t know what that means. I don’t know if that’s going to have downstream effects on their psychological investment, on their emotional health, all of that stuff, right?

I am worried about the Ender’s Game problem, where you could just sit in your sort of virtual Tactical Operations Center and plink targets without recognizing that those are humans on the other end, right? But since we’ve been wrong in the past, I’m hesitant to take a strong view of that.

I do have very strong views about the duty that we—I’ll say “we” as an old guy—the duty that we more senior people in the military have to our subordinate young people who join the military. We have a duty to take care not just of their technological competence, not just of their tactical prowess, not even just of their physical bodies when they’re in uniform and then separated or retired, but also of their ability to flourish as humans.

The last thing I would want to do is go down a technological road that puts us in a position where maybe I’m putting young airmen or young lieutenants in a situation where they’re maybe not cultivating virtue—maybe even cultivating vice—for the reasons that you mentioned, right?

As a Christian, I tend to be fairly Aristotelian in my understanding of virtue and flourishing, and that’s where I would have concerns. But again, since we don’t know what the emotional and psychological reaction is going to be, I don’t have a strong view.

Unlike the remotely piloted aircraft business, though, I don’t want to get 10 years into it before we start asking those questions, right? And I do think we’ve learned those lessons. People are already trying to get ahead of that concern, but I think you’re right—it’s a real concern.

Thank you.