Too often on tough and contentious issues, Christians react rather than think with the best information or nuance. Evangelicals in particular tend toward moralism and oversimplification on many issues because they read them through the frame of a culture war, which means there is a “right” side and a “wrong” side. Instead of appreciating the complexity of most political, cultural, and social issues, evangelicals take up sides and begin sloganeering. But the vast majority of big questions we face as a society are not simple matters of right and wrong.

To be fair, this is a broader problem of our time and place. Americans have become a deeply reactive society, venting anger before knowing all the facts. An event fits a preconceived framework, so they do not bother to think it through, appreciate the nuance, or even wait for the facts to come out. Better to just create more grist for the fear-and-anger mill.

From the get-go, artificial intelligence has elicited strong responses. I confess that AI freaks me out. Having grown up watching Terminator movies and a never-ending stream of dystopian sci-fi thrillers, I have a deep and abiding skepticism of technology. Artificial intelligence conjures up the deepest of fears because the possibilities for destruction and threats to human life seem endless. Perhaps no issue since nuclear weapons portends such dire possibilities for human life.

How then should Christians address the question of AI?

Acceptance

As with nuclear weapons and other technological advancements that presented serious moral challenges, Christians need to start with a big dose of realism. AI is not going away any time soon, nor should it. Most of the ways AI is deployed are immensely beneficial and non-threatening.

Those who present the AI issue in apocalyptic terms do not fully appreciate this fact. AI has improved everything from medicine to driving, though most people do not think of it as AI. For one, AI is not one thing. The spectrum from automated systems that exercise low-level decision making to fully autonomous, self-directed systems is broad, and such systems rarely if ever take humans out of the loop; in other words, humans still participate.

New cars today have several features that would be considered AI. Blind-spot warnings and automatic braking systems are just two examples. These systems still require a human “in the loop” to operate, yet they function with a certain amount of autonomy from human control.

We have been using autonomous features in weapons for a long time, so the principle of autonomy is not new. The Germans deployed self-guided acoustic homing torpedoes on U-boats during World War II. Missile systems often use limited autonomy to perform functions that humans cannot perform well. The Tomahawk Anti-Ship Missile, developed in the 1980s for use against Soviet naval ships, was arguably the first fully autonomous weapon: after launch, the missile would search for and engage a target independent of human decision making.

When it comes to weapons systems, fully autonomous weapons remain more fiction than reality. Neither the US nor China will deploy fully autonomous weapons in the near or medium term, though autonomy is being built into weapons and weapons systems to varying degrees. Humans remain in the loop and in control.

China desires to gain a competitive advantage in AI and has committed extraordinary resources to AI research and development, which should concern America and its allies. That said, the AI genie is out of the bottle. It cannot be put back in. Instead of seeking ideal solutions that have no possibility of effecting real change, we should seek restraints and solutions that have some prospect of addressing the very real concerns that nations have.

Killer Robots Are Not Our Problem

Human Rights Watch, an international NGO, is campaigning to have so-called “killer robots” banned before they become a reality. The campaign has all the hallmarks of humanitarian campaigns that seek attention but do not effect real change. For starters, a preemptive ban on AI weapons stands zero chance of success; none of the major players is even considering such a ban. Like the International Campaign to Abolish Nuclear Weapons (ICAN), which promotes the idea of a world free of nuclear weapons, the killer robots campaign has no real chance of shaping policy or practice. Rather than promote solutions that stand no chance of being implemented, Christians should look at the real threats and possibilities.

At the University of Chicago, there is a memorial on the site of the converted squash court where the Manhattan Project achieved the first self-sustaining nuclear chain reaction. I used to live in a building right next to it, and from time to time protestors would gather there to demonstrate against nuclear weapons. Nobody paid attention except the odd passerby. A blanket ban on AI weapons could meet the same fate: nations would ignore it, and it would do nothing to bring about a weapons control regime capable of curbing the weapons’ worst effects.

Paul Scharre, director of the Technology and National Security Program at the Center for a New American Security, argues in his book Army of None that the problem is fundamentally rooted in a lack of clarity about what would actually be banned:

It is entirely reasonable that states and individuals who care a great deal about avoiding civilian casualties are skeptical of endorsing a ban when they have no idea what they would actually be banning. Automation has been used in weapons for decades, and states need to identify which uses of autonomy are truly concerning. Politics gets in the way of solving these definitional problems, though. When the starting point for discussions is some groups calling for a ban on “autonomous weapons,” then the definition of “autonomous weapons” instantly becomes fraught. (349)

Though sober and realistic about the prospects for restraining the use of AI in weapons, Scharre does believe there are options countries could fruitfully pursue. One proposal is to start with a ban on antipersonnel autonomous weapons, that is, those that target people, since the scope of such a ban would be clear and countries developing AI would have more incentive to implement it. Another option Scharre proposes is creating “rules for the road” for the development of autonomous weapons. These guidelines would not be enforceable, but if the relevant parties adopted them, they would establish standards that states would be motivated to follow.

Just War Considerations

Are autonomous weapons immoral, and should Christians committed to just war ethics reject them? Joe Carter implies that autonomous weapons are inherently immoral and should be banned on just war grounds. My response: Which weapons? How much autonomy? Should all weapons with some level of autonomy be banned? On what grounds? Many autonomous weapons are defensive in nature; should they be banned too? Carter is right that robots have a very hard time determining context and nuance, which is precisely why humans are in the loop and will remain there. As a blanket claim, though, his position is overly broad and hard to justify on just war grounds.

Any restraint on the use of certain weapons must also consider how other countries would use and deploy them. For America to unilaterally ban autonomous weapons while Russia and China move full steam ahead with them would be about as wise as the US unilaterally banning nukes. Practicality has to be part of any moral calculus.

What Carter does not recognize is that autonomous weapons, when deployed competently and carefully, can be extremely discriminating, often more so than a human. The default assumption that humans are more accurate or reliable is simply not true. While technology presents us with challenges, it also presents us with benefits. Contemporary warfighting has made major gains in discrimination and proportionality. Despite the doomsday thinking, we have been moving toward greater and greater discrimination, not less, in large measure because of advancements in weapons systems.

We should always subject technological advancements to great scrutiny and demand that our militaries uphold just war principles. What is needed now and in the future is subtlety and realism. What will work? How can we begin curbing the worst aspects now? What are the most morally fraught aspects of autonomous weapons? If we start here, we stand a better chance of harnessing the positive benefits of AI while restraining its potential abuse.