Writing in Advisor Perspectives, mathematician and economist Michael Edesess discusses Philip Tetlock’s work with co-author Dan Gardner examining “superforecasters.” Superforecasters, as Edesess explains, are the few volunteers in Tetlock’s forecasting study who “measure as significantly and consistently better than the others and much better than a dart-throwing chimpanzee.”
Tetlock distinguishes “foxes” from “hedgehogs” among the large number of forecasters enrolled in his study. As the Greek poet Archilochus put it, “the fox knows many things, but the hedgehog knows one big thing.” Edesess explains: “Tetlock found that hedgehogs, who were certain about the one big thing they believed they knew, were worse forecasters – even (and, in fact, especially) about that one big thing – than foxes, who knew many little things but were uncertain about what they knew.” He notes, “the forecasters who were wracked with uncertainty did better at forecasting than those who were not in the least wracked with uncertainty.”
Edesess notes that Tetlock uses two measures – calibration and resolution – to distinguish among forecasters. Together these yield what is known as a Brier score (named for its developer, Glenn Brier). Further, “you can calculate the Brier score only if you can determine whether the forecast came true or not.” Thus, to quote Tetlock, the predictions must address “clear questions whose answers can later be shown to be indisputably true or false.” Edesess explains: “a precise forecast must be about a specific event, occurring within a specific future period of time.” An example of the opposite – predictions which cannot be scored in this way – is the debate between Keynesians and Austerians during and after the financial crisis.
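To make the scoring concrete, here is a minimal sketch (not from Edesess or Tetlock) of how a Brier score can be computed for yes/no forecasts: each forecast is a probability, each outcome is 1 if the event occurred and 0 if it did not, and the score is the mean squared difference between them. Lower is better; a forecaster who always says 50/50 scores 0.25. Tetlock’s studies use a variant that sums over all answer categories, but the idea is the same.

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and outcomes.

    forecasts: probabilities in [0, 1] that each event occurs
    outcomes:  1 if the event occurred, 0 if it did not
    Lower is better: 0.0 is perfect, 0.25 is what a permanent
    50/50 forecaster earns.
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A confident, well-calibrated forecaster...
print(brier_score([0.9, 0.8, 0.1], [1, 1, 0]))   # 0.02
# ...versus one who hedges everything at 50%.
print(brier_score([0.5, 0.5, 0.5], [1, 1, 0]))   # 0.25
```

This also shows why the questions must resolve as “indisputably true or false”: without an unambiguous 0 or 1 outcome, the score cannot be calculated at all.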
So, what makes a superforecaster? Edesess quotes Tetlock and Gardner at length. They wrote that “superforecasters often tackle questions in a roughly similar way – one that any of us can follow.” It includes “unpack[ing] the question into components” and distinguishing “between the known and unknown and leav[ing] no assumption unscrutinized.” Also, “adopt the outside view and put the problem into a comparative perspective,” and “then adopt the inside view that plays up the uniqueness of the problem.” Further, “explore similarities and differences between your views and those of others” while paying “special attention to prediction markets and other methods of extracting wisdom from crowds.” Once these views are “synthesize[d] . . . into a single vision,” one should express judgment “using a finely grained scale of probability.”
Edesess also discusses Tetlock and Gardner’s self-critical approach, which enhances their credibility, as well as how their findings apply to the world of leadership, where firm decisions are necessary. Interested readers may also want to see this blog’s prior post on improving forecasting skill, also drawn from Tetlock’s work, here.