On the topic of cybersecurity, assessments of President Joe Biden’s recent summit with Vladimir Putin have been tepid at best, at times approaching glacial. Much more was demanded. Concern over Russian involvement in cyberattacks against US interests has grown over the last few weeks—and has grabbed popular attention more resolutely—after ransomware criminals believed by many to be supported by the Kremlin (read: they are supported by the Kremlin) breached computer networks belonging to Colonial Pipeline and the beef processor JBS.

Both attacks have helped drive home the idea that cybersecurity issues can have a direct and significant effect on everyday life—down to such quotidian concerns as whether you have enough gas in your car to drive to the store for ground beef that isn’t there. Biden’s response, including handing Putin a list of 16 sectors that are off-limits to cyberattacks, is rightly regarded as insufficient, if not ridiculous. But it also exposes shortcomings in how cyberattacks are conceptually understood.

For instance, no US president would ever hand an adversarial counterpart a list of assets and demand they not be bombed with conventional munitions. This is for the simple reason that there’s nothing to do but point to a map of the United States and say, “don’t bomb any of this.” Grounded in common sense, our cyber policy should be just as comprehensive: if we wouldn’t tolerate an adversary bombing something, we probably shouldn’t tolerate their hacking it. While there’s something to be said for clearly stating which lines are red lines that will trigger a particularly muscular response, it’s critical that those who mean us harm understand that any attack against the United States, whatever its level, is unacceptable. Responsible governments—whether directly involved or merely supportive—will be held accountable for the consequences of any kind of attack, at any level, and no matter the target.

Another way of saying this is that instead of lists of off-limits targets, we simply give our adversaries an understanding of what peace with the US looks like, and we remind them that if they mess with us, we will respond with force appropriate to the attack, to stopping it, and to ensuring that it cannot happen again. This is to say that addressing cyberattacks is essentially no different from addressing any other kind of attack. The just war tradition, though crafted over centuries before such things as cyberspace could even be imagined, is therefore well fitted to helping us think through familiar questions of just cause and of discriminate and proportionate response.

Of course, this doesn’t mean that things don’t get complicated. For starters, many cyber operations do not neatly meet the standards set by the just war criteria. Often, cyberattacks fall short of being actual acts of war, or seem to, occupying instead a “grey zone” between the traditional distinctions of war and peace. While the just war criteria help us understand when lethal force is permissible, how do we ethically evaluate permissible responses to operations that are only potentially harmful, or harmful to a less-than-lethal extent?

I was involved in a recent discussion in which the following hypothetical was raised. Imagine that a busy, large city’s emergency response infrastructure fails when its Computer-Aided Dispatch (CAD) system goes down. The CAD failure raises emergency response times from 10 minutes to 30, increasing mortality by 25 percent. If there were strong reasons to suspect that Russian hackers caused the failure, would the consequent harms justify a lethal response? Many of the discussants were unsure. Much of this uncertainty had to do with the problem of attribution: it is notoriously difficult to reach confident conclusions about who is behind a particular cyberattack. What would the just war tradition have to say?

Because the attack on the CAD system resulted in deaths due to the increased emergency response times, I would argue that a just war analysis would find just cause and that a lethal response would in fact be permissible. An attack has occurred; people have died. Justice demands both that the victims be vindicated and that the injustice be punished.

What about cyber operations that do not directly lead to deaths? For instance, as we’ve recently seen, cybercrime and cyber-espionage by foreign actors lead to extensive loss of commercial intellectual property. For the most part, these losses have amounted to less than total GDP growth, and so they are, in a sense, absorbable. What would happen were this to change? If the costs of malicious cyberpiracy rose until US business losses triggered massive layoffs, unemployment skyrocketed, and already strained safety nets buckled, what then? If Chinese military cyber-forces were behind a significant share of the operations, what kind of US response would be permissible?

It’s easy to suggest that non-lethal operations do not warrant lethal responses. But surely, in the scenario above, the lines blur. The just war tradition generally presumes that adversaries render themselves liable to be killed when they forfeit their right not to be harmed by themselves culpably employing lethal—or potentially lethal—force against the innocent. Some would argue that piracy, even when it wreaks havoc on the economy, does not rise to that level of liability, since such piracy leads only to reduced standards of living. But this is surely saying too little. History—including present history—has shown time and again that economic crises have a real impact on human flourishing and very real effects on mental and physical health, sometimes to lethal effect. Direct attacks against the economy, especially when they cast human beings into poverty, amount to direct assaults upon the common good, with life-and-death consequences. Again, in the face of such attacks, I would argue that justice allows for responses that rise even to the use of lethal force.

The point here is that proportionality does not simply mean responding with the same degree of force with which you have been hit. Rather, proportionality is properly aimed at using the degree of force necessary to protect the innocent, to take back what has been wrongly taken, and to punish to a degree sufficient to restrain current wrongdoing and deter future wrongdoing. To be sure, proportionality demands limits here, but the responsible use of force would err, as in cancer surgery, toward wide margins.

Other constraints, including discrimination, also shape our response. As noted above, attribution is often difficult in cyber operations. One obvious element of cyber-defense, therefore, is investing in capabilities that improve our ability to determine where attacks originate. We also need to think carefully about the degree of certainty required before we act. Is certainty sufficient, or must that certainty be provable? The scale of the original attack might help shape what a response requires.

Moreover, prudence—as always in just war reasoning—requires that we carefully assess the risks of escalation. Cyber operations, occupying that space short of war, can easily provoke responses that end up triggering all-out combat. Presumably, there are times when this simply cannot be helped.

My point here is only that while cyber operations present new challenges, they are challenges that can be met with old-world moral frameworks that have long helped both church and state think through complex moral issues between international rivals. The just war tradition remains up to the task of engaging the cyber domain.