After the intelligence community's failure before the Iraq War, when it confidently concluded that Saddam Hussein had an active weapons of mass destruction (WMD) program, the government created the Intelligence Advanced Research Projects Activity (IARPA) in 2006. Just as DARPA funds research into military technology (like stealth planes), IARPA funds research to improve American intelligence, including a tournament, begun in 2011, in which academics tested new methods of forecasting geopolitical events (pp. 16-18, 84-91). In Superforecasting: The Art & Science of Prediction, Philip Tetlock and Dan Gardner explain how Tetlock's project excelled in this tournament: his best forecasters, or "superforecasters," even outperformed professional intelligence analysts who had access to classified information (p. 95). The authors argue convincingly that studying how superforecasters think can help other analysts, whether in business, government, or elsewhere, improve their own forecasts.

Discovering Superforecasters

For the tournament, Tetlock recruited thousands of participants, intelligent but otherwise ordinary people, and asked them short geopolitical questions such as, "Will India or Brazil become a permanent member of the UN Security Council in the next two years?" The carefully worded questions always included a short time frame (forecasts beyond five years proved no better than guessing) so that each forecast would have a definitive answer: accurate or not (pp. 2, 52-53, 244). A participant usually began with little knowledge of the topic and had to conduct research. After reaching a conclusion, he or she would submit an answer online as a percentage (say, an 8% chance India or Brazil becomes a permanent member).

Giving an initial answer was not the end. The participant could still change the forecast in light of new developments (pp. 153-155). If Obama spoke positively about India becoming a new member, the forecaster might raise the odds slightly, perhaps to 9%. If Putin declared he would never accept a new permanent member on the Security Council, the odds would drop significantly, perhaps to 3%. When scored, each change counted as a separate forecast (p. 92).
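In practice, a forecaster's record on a single question amounts to a timestamped log in which each revision stands as its own forecast. A minimal illustrative sketch in Python, with dates and probabilities invented to echo the example above:

    from datetime import date

    # Hypothetical log for one question. Each revision is recorded
    # separately because each counts as its own forecast when scored.
    forecast_log = [
        (date(2014, 1, 10), 0.08),  # initial estimate after research
        (date(2014, 3, 2), 0.09),   # nudged up after supportive remarks
        (date(2014, 6, 15), 0.03),  # cut sharply after a firm veto threat
    ]

    for when, probability in forecast_log:
        print(f"{when}: {probability:.0%} chance")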

If after the two years neither Brazil nor India became a permanent member of the UN Security Council, the participants who forecast this outcome would receive a better "Brier score" (lower scores are better). If, however, Brazil became a permanent member, those who put the odds at 95% would score better than those who said 90%, while those who said 5% would be penalized more heavily than those who said 10% (pp. 64-66). Participants with the best Brier scores became the "superforecasters."
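For readers curious about the arithmetic, the Brier score in its simplest binary form is the squared gap between the stated probability and what actually happened (1 if the event occurred, 0 if not), averaged across forecasts. The book's version sums over both possible outcomes, which doubles these numbers, but the ranking of forecasters comes out the same. A minimal sketch, with invented forecasts:

    def brier_score(forecasts):
        # Each item pairs a stated probability (0.0-1.0) with an outcome
        # (1 if the event happened, 0 if not). Lower scores are better.
        return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

    # Suppose Brazil does become a permanent member (outcome = 1).
    print(round(brier_score([(0.95, 1)]), 4))  # 0.0025 -- confident and right
    print(round(brier_score([(0.90, 1)]), 4))  # 0.01 -- a bit less confident, a bit worse
    print(round(brier_score([(0.10, 1)]), 4))  # 0.81 -- wrong, heavily penalized
    print(round(brier_score([(0.05, 1)]), 4))  # 0.9025 -- confidently wrong, worst of all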

Studying how these superforecasters outperformed everyone else yields fascinating lessons, which Superforecasting explains in clear language. As the authors argue, superforecasters are normal humans, not clairvoyant, omniscient, or otherwise god-like. They are usually about as intelligent as someone who would read a book about forecasting and psychology (or perhaps someone who would get this far in a review of that book). Their lessons, therefore, can be taught and learned.

Foxes and Hedgehogs

One key lesson is that superforecasters tend to be "foxes" rather than "hedgehogs". The labels, ways of categorizing analysts, come from the ancient Greek poet Archilochus, who wrote, "The fox knows many things, but the hedgehog knows one big thing." A hedgehog may specialize in one theory (say, Keynesian or Hayekian economics) and know it backwards and forwards. He sees the world only, or at least mostly, through that paradigm. "Is the economy bad? Well, then pump more stimulus into the economy like Lord Keynes would do! The economy is still bad? Then there wasn't enough stimulus, or we should just wait (maybe decades) until my predictions come true!" For the hedgehog there are no shades of gray; everything is black and white, and there is little uncertainty (pp. 69-72).

The fox, however, may not know a single theory or topic as well as the hedgehog, but she knows enough about many. Foxes are like the “two-handed economists” who give conflicting advice by saying, “On the one hand… on the other.” For instance, she may say, “On the one hand, the government could spend more to stimulate the economy and prevent a deeper recession. On the other hand, spending more could cause certain asset prices, such as property, to balloon and then burst, which could cause another recession.” Though she may not appear as confident while explaining her nuanced reasoning, the fox can better aggregate several theories and topics into a single analysis inside a single brain (p. 74).

Superforecasting explains the difference with a tinted-glasses metaphor. The hedgehog can only see the world through green-tinted glasses, which help him see certain green details more clearly. The fox, meanwhile, can switch among green-tinted, red-tinted, blue-tinted, and many other glasses, which helps her see far more detail. The authors elaborate:

But far more often, green-tinted glasses distort reality. Everywhere you look, you see green, whether it’s there or not. And very often, it’s not… So the hedgehog’s one Big Idea doesn’t improve his forecast. It distorts it. And more information doesn’t help because it’s all seen through the same tinted glasses. It may increase the hedgehog’s confidence, but not his accuracy… When hedgehogs in the EPJ [Expert Political Judgment, Tetlock’s previous project] research made forecasts on the subjects they knew the most about—their own specialties—their accuracy declined. (p. 71)

Anyone interested in becoming a superforecaster, or at least a better forecaster, should thus learn how to think like a fox.

One major problem, though, is that no one watching CNN wants to see two foxes give nuanced justifications or speak in shades of gray (p. 72). Watching two overconfident hedgehogs go after each other is much more entertaining, like two gladiators in the Colosseum.

Winners Keep Score

Another key lesson is that anyone who wants to improve his or her forecasting must keep score: give each forecast a specific time frame and a percentage, then follow up to measure what actually happened. Without this information, forecasters cannot understand why they were wrong, or even whether they need to improve at all (pp. 179-192). Similarly, a runner who never times his runs cannot know whether his training is working.

When keeping score, using a percentage is critical. Simply writing down that there is a "serious probability" something may happen is not enough: "serious probability" could mean anywhere from a 20% to an 80% chance. If the forecast does not come true, the analyst can easily claim he meant only a 20% chance; if it does come true, he can claim he meant 80%. He may even believe this himself because he misremembers what he thought (pp. 55-56).

Personally, I have found that using percentages when talking with friends and colleagues about possible future events helps clarify what we mean. A friend might say, "Brexit may cause the EU to collapse!" What does he mean? Do the odds of an EU collapse within four years rise from 20% to 25%? Or from 45% to 60%? Without this information, understanding what he means, let alone keeping score, is impossible.

Using percentages this way may require the public to adjust its thinking. Some, perhaps most, people have difficulty comprehending a percentage-based forecast. Superforecasting uses this example: a meteorologist says there is a 70% chance of rain, and many conclude the forecast was wrong if it does not rain. But it wasn't wrong. In fact, if it rained 100% of the time the meteorologist said there was a 70% chance, the meteorologist would need to correct his underconfident forecasting (pp. 57-60).
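This kind of reliability is what forecasters call calibration, and checking it is straightforward: gather all of a forecaster's "70%" calls together and see how often those events actually happened. A rough sketch in Python, using made-up data:

    from collections import defaultdict

    def calibration(forecasts):
        # Group forecasts by stated probability, then compare each group's
        # stated chance with how often the events actually occurred.
        buckets = defaultdict(list)
        for probability, outcome in forecasts:
            buckets[probability].append(outcome)
        for probability in sorted(buckets):
            outcomes = buckets[probability]
            observed = sum(outcomes) / len(outcomes)
            print(f"said {probability:.0%}: happened {observed:.0%} "
                  f"of the time ({len(outcomes)} forecasts)")

    # A made-up meteorologist whose "70% chance of rain" calls all came true.
    calibration([(0.7, 1), (0.7, 1), (0.7, 1), (0.7, 1)])

A well-calibrated forecaster's 70% bucket should come true about 70% of the time; a bucket that comes true 100% of the time signals underconfidence, just as the book describes.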

Moreover, including percentage forecasts in news articles and op-eds may not translate well for most readers, at least in the short term. A gradual introduction would be best.

Many may become frustrated wading through analysts' uncertainty expressed as percentages, as President Obama may have experienced when deciding whether Osama bin Laden was hiding at a compound in Pakistan (pp. 130-136). Yet, as Superforecasting argues, using percentages is the best way for the intelligence community to avoid another massive misanalysis like the Iraq WMD conclusion, which the authors judge was a reasonable forecast but should have been expressed as a 60-70% chance rather than a 99-100% certainty (pp. 81-85, 252).

Even Humanities-Lovers Can Understand

Superforecasting holds many more lessons about how superforecasters and foxes think and how readers can improve their own forecasting. Whether explaining fat tails, regression to the mean, or other concepts, the authors succinctly break down the necessary mathematical or psychological ideas in an approachable way. Other books cover some of these concepts, but Tetlock and Gardner offer the clearest descriptions I have read thus far. Even someone who dreads math and loves the humanities can understand and apply them.

Overall, Superforecasting has benefited my own thinking as I analyze not only geopolitical events but also many other issues. Christian realists, who must understand the world as it is while responding to real-world events, would also benefit from learning how foxes and superforecasters think. Having more-accurate forecasts would allow Christian realists to develop better policy recommendations that promote a more just and peaceful world.

Mark Melton is the Deputy Editor for Providence. He earned his Master’s degree in International Relations from the University of St. Andrews and has a specialization in civil conflict and European politics.

Photo Credit: By Peyman Zehtab Fard via Flickr.