Experts vs. Models: Who Wins?

A few recent articles offer some interesting data on the merits of quantitative analysis and the limits of “expert” analysis.

One is a Greenbackd.com review of Ian Ayres’ book Super Crunchers (thanks to The Stingy Investor blog for highlighting it). According to Greenbackd, Ayres’ book offers several examples of statistical models or algorithms going head to head with experts and winning, including a study that pitted a statistical model against legal experts in predicting the outcomes of Supreme Court cases.

“The experts lost,” Ayres writes, according to Greenbackd. “For every argued case during the 2002 term, the model predicted 75 per cent of the court’s affirm/reverse results correctly, while the legal experts collectively got only 59.1 per cent right. The computer was particularly effective at predicting the crucial swing votes of Justices O’Connor and Anthony Kennedy.”
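The Greenbackd excerpt doesn’t detail how the Supreme Court model worked, but the study it describes reportedly used simple classification trees built from a handful of coded case characteristics. As a rough illustration only, here is a minimal sketch in Python; the features and training rows are hypothetical stand-ins, not the study’s actual data or variables:

```python
# Illustration only: a small classification tree that predicts
# affirm/reverse from coded case characteristics. The features and
# training rows below are hypothetical, not the study's data.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical coding for each case: [circuit of origin,
# lower-court ruling was "liberal" (1/0), petitioner is the U.S. (1/0)]
X_train = [
    [9, 1, 0],
    [5, 0, 1],
    [2, 1, 1],
    [9, 0, 0],
    [4, 1, 0],
    [11, 0, 1],
]
y_train = ["reverse", "affirm", "reverse", "affirm", "reverse", "affirm"]

tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(X_train, y_train)

# Predict a new case coded the same way.
print(tree.predict([[9, 1, 0]]))  # e.g. ['reverse']
```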

According to Ayres, the Supreme Court result was no anomaly.

Instead, it “is representative of a much wider phenomenon,” he says. “Since the 1950s, social scientists have been comparing the predictive accuracies of number crunchers and traditional experts – and finding that statistical models consistently outpredict experts. But now that revelation has become a revolution in which companies, investors and policymakers use analysis of huge datasets to discover empirical correlations between seemingly unrelated things.”
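To make the “number crunching” in that passage concrete: at its simplest, it means scanning a dataset for the strongest empirical correlations. A minimal sketch (with randomly generated placeholder data, purely for illustration) might look like this:

```python
# Illustration only: scan a table for its strongest pairwise
# correlation. Columns and data are random placeholders, not
# real findings of the kind Ayres describes.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(1000, 4)),
                  columns=["metric_a", "metric_b", "metric_c", "metric_d"])

corr = df.corr().abs()
# Mask the diagonal so a column can't "correlate" with itself.
off_diagonal = corr.where(~np.eye(len(corr), dtype=bool))
pairs = off_diagonal.stack()  # NaN diagonal entries are dropped

print(pairs.idxmax(), round(pairs.max(), 3))
```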

Another good piece on the topic comes from Harvard Business Review’s Andrew McAfee. Writing on HBR’s web site, McAfee argues that relying on human intuition can be dangerous. “A huge body of research has clarified much about how intuition works, and how it doesn’t,” he says, offering a few examples of what that research has shown:

  • It takes a long time to build good intuition.
  • Intuition only works well in specific environments, ones that provide a person with good cues and rapid feedback.
  • We apply intuition inconsistently.
  • It’s easy to make bad judgments quickly.
  • We can’t know where our ideas come from.

McAfee points to a couple of studies in which statistical models outperform experts. “But aren’t there at least as many areas where the humans beat the algorithms?” he adds. “Apparently not. A 2000 paper surveyed 136 studies in which human judgment was compared to algorithmic prediction. Sixty-five of the studies found no real difference between the two, and 63 found that the equation performed significantly better than the person. Only eight of the studies found that people were significantly better predictors of the task at hand.”

Nonetheless, McAfee says experts shouldn’t be cast aside completely. In many cases, he says, a combination of statistical models and human decision-making is advisable.
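What might such a combination look like in practice? McAfee doesn’t prescribe a method, but one simple (hypothetical) approach is to blend a model’s probability estimate with an expert’s, weighting the model more heavily:

```python
# Hypothetical illustration of combining model output with human
# judgment: a weighted average of the two probability estimates.
# The 0.7 model weight is arbitrary, not a recommendation.
def blended_estimate(model_prob: float, expert_prob: float,
                     model_weight: float = 0.7) -> float:
    """Blend a model's probability with an expert's judgment."""
    return model_weight * model_prob + (1 - model_weight) * expert_prob

# Example: the model says 80% chance of an outcome; the expert says 50%.
print(blended_estimate(0.80, 0.50))  # 0.71
```

Click here to read the full article.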
