In Israel’s tabernacle, the high priest stepped into the Holy Place carrying the tribes over his heart. Their names were engraved on twelve stones fastened to a breastpiece of judgment—an embodied reminder that this people existed by grace, not merit. Hidden in that same vestment were two mysterious objects: the Urim and Thummim, “lights and perfections” (Exodus 28:30), through which the Lord sometimes gave clear, binary answers about war, kingship, and justice.

Today the jeweled breastpiece has been replaced by glowing screens. The high priest has been displaced by machine-priests: engineers, data scientists, lawyers, and intelligence officers who tend our new “oracles.” From AI-assisted targeting in Gaza to predictive-policing and welfare algorithms, we increasingly entrust grave decisions to opaque systems whose inner workings almost no citizen can see. This raises the ethical question: To whom are we willing to give the power to speak with apparent authority about life and death, guilt and innocence—and on what terms?

The rise and disappearance of the Urim and Thummim offers Christian realists a guide for thinking about AI oracles in an age of war.

When God Turned Off His Own Oracle

Scripture tells us very little about the Urim and Thummim. We do not know their size or shape, or how often they were used. We know only that God commanded Moses to place them “in the breastpiece of judgment,” and that they could be consulted at critical moments: “Shall I go up? Will Saul come down?” (1 Samuel 23:9–12). For a people still learning to trust the Lord in concrete history, that clarity was a gift. Israel needed to know that YHWH was not like the mute idols of the nations. Yet even then, the oracle was never a shortcut around moral formation. The same Torah that institutes the Urim and Thummim commands judges to hear witnesses, weigh evidence, and refuse bribes. God never meant Israel to live by oracles alone.

Most striking of all, by the time of the exile and restoration the oracle simply vanishes. Ezra speaks of disputes that must wait “until a priest with Urim and Thummim should arise” (Ezra 2:63), implying that in practice no such guidance is available. God withholds the old device without abandoning his people. He forces them to live by the written Word, the prophetic tradition, and the slow, painful work of communal discernment. In other words, God disables his own “black box.” He refuses to keep Israel in a perpetual childhood of oracles. He prefers to form a people who can judge, repent, and act under his providence.

That pattern ought to sober us as we hurriedly enthrone new oracles of our own.

Gaza and the First Robotics War

Israel has already begun to harness the power of AI for its military. After Hamas’s massacre of October 7, 2023—the deadliest day in Israel’s history, with around 1,200 people killed and roughly 250 taken hostage—the Israel Defense Forces shifted into a grinding urban war against a terrorist organization embedded in civilian neighborhoods and infrastructure.

Israeli officials have described the campaign as “the first robotics war,” with “tens of thousands of autonomous systems” deployed across all domains. Small ground robots, loitering munitions, unmanned naval craft, and AI-assisted targeting platforms are presented not only as instruments of lethality but also as a way to protect IDF soldiers and to make strikes more discriminating in dense civilian environments. 

But the deeper continuity is not the hardware. It is the posture: commanders, legislators, and citizens standing before systems they cannot see into, receiving outputs they cannot fully explain, yet feeling pressure to treat those outputs as authoritative. The temptation is to imagine that the oracle itself resolves our moral dilemmas—that if “the system” says a building is a command node or a person is “high-risk,” we can dodge the burden of judgment. Biblically and politically, that is backwards.

Israel, Hamas, and the Law of Nations

In Israel, AI and law now intersect sharply. Israel has built a dense network of legal advisers around its targeting processes: analyses of IDF practice describe legal advisers as an “integral facet” of operational planning and targeting, supported by a distinctive Military Advocate General’s Corps and intensive legal review. None of this settles arguments about proportionality, siege, or urban bombardment in Gaza, especially in light of staggering Palestinian casualties and critiques from bodies like the UN human rights office. Christian realists should neither romanticize Israeli actions nor accept Hamas’s propaganda. But it does matter that Israel’s use of AI-assisted targeting sits inside an existing, heavily lawyered framework rather than in a moral vacuum.

To be “pro-Israel” in a Christian realist sense is not to baptize every IDF operation. It is to hold together three truths: moral clarity about the asymmetry between Hamas’s deliberate terror and a flawed but law-bound state; honest grief for innocent life lost on both sides and a willingness to scrutinize Israeli tactics without demonization; and sober attention to how Israel’s choices about AI, robotics, and law will shape global norms for decades to come.

What Our New Oracles Cannot Do

Christian realists affirm that political authority is both ordained and limited. States are called to reward good, punish evil, and secure a basic public peace. They are not given omniscience, and they remain accountable to God and neighbor for their judgments—especially when they aim lethal force at a city block, a refugee boat, or a prison cell.

Our AI oracles invite us to forget this in three connected ways. First, predictive-policing and risk-scoring systems promise a kind of artificial omniscience; they sift vast data sets and begin to feel more trustworthy than the messy people they describe, making it easy to see neighbors as probabilities rather than as bearers of the divine image. Real-world deployments of tools like the COMPAS risk-assessment system, and investigative work such as ProPublica’s “Machine Bias,” have already sparked deep debate about how such systems can encode and mask racial bias.

Second, algorithmic tools blur responsibility. When an AI-assisted drone hits the wrong car or an asylum-seeker is wrongly flagged as fraudulent, it becomes tempting to hide behind “the system.” We must still be able to say, in specific cases: this use of AI in targeting, policing, or welfare is unjust, even if every box on the form was checked. Ongoing efforts like the EU AI Act, with its rules for high-risk systems in law enforcement, migration, and public services and its provisions on human oversight, may help. But no regulatory framework can replace moral judgment.

Third, AI systems tempt us to swap the doctrine of providence for a doctrine of probability. A neighborhood labeled a future crime hot spot, or a population assigned a high fraud-risk score, can quietly be treated as if its destiny were fixed. Biblically, that is false. The God who chose Israel “not because you were more in number…but because the Lord loves you” (Deuteronomy 7:7–8) and grafts unlikely branches into his olive tree (Romans 11) does not bind himself to our confidence intervals.

A Christian Realist Way of Living with Black Boxes

What, then, might a Christian realist approach to algorithmic oracles look like?

At minimum, it means drawing bright lines around decisions that must not be delegated to opaque systems: final positive identification for bombing targets in dense cities; decisions to deny asylum or deport families; long-term incarceration based on risk scores. Tools may inform judgment but must not replace it.

It also means demanding intelligible explanations and real avenues of appeal wherever authorities use AI: clear documentation of intended use, independent auditing for discriminatory impact, and procedures by which ordinary people—not just experts—can challenge the oracle’s verdict. In biblical terms, the “gate” of justice must remain a place where the poor and the stranger can make their case, not a sealed portal guarded by proprietary code.

Finally, Christian realists must keep responsibility personal and ecclesial. Dense legal-adviser networks and elaborate doctrine are not reasons to relax; they are reasons to ask sharper questions: Which commander ultimately said yes? Who will answer before God for civilian blood and unjust wars? The Urim and Thummim were real, and their lights once flickered in the dark of the priest’s breastpiece—but God did not let Israel keep them forever. So too with our black boxes. No oracle—human or machine—can absolve us of the call to love God, love our neighbors, and govern as those who must one day give an account before the judgment seat of Christ.