As anyone who reads my articles likely already knows, I am a big believer in quantitative investing. I think the ability of systematic models to limit the role of human emotion and biases in the investing process is a big advantage over discretionary approaches.
But even as those of us who are quants talk about the many benefits of our approach to investing, we also need to acknowledge that we may be a little more like our discretionary friends than we care to admit. The reason is that there is a big difference between reducing the role of human decision making in the investing process and eliminating it.
Those of us who support quant models have a tendency to talk about them as if they simply run on autopilot over the long term, free from all the decision-making issues that plague us as human beings. That just isn’t the case, though.
Don’t get me wrong: there is no doubt in my mind that this is an area where quant models excel relative to their discretionary counterparts. In my opinion, the more a strategy can set up a defined series of rules that govern what it does, and the more consistently it can follow those rules, the better chance it has to generate strong returns over the long term. But behind those rules there will always be a person (or a team of people) who has to make a series of decisions that will have a significant impact on how the strategy performs over time. For that reason, the purely quantitative strategy is more myth than reality, and it is likely just as important to evaluate the people and process behind a quant strategy as it is to evaluate the strategy itself.
Here are a few ways that human decision-making plays a role in even the best quantitative strategies.
The Initial Construction
The first step in building any factor-based strategy is determining what goes into it. This may seem like a fairly simple process in theory, but in practice it is much more difficult. Let’s use the example of a value strategy. First, there is the basic question of how to define value. Do you define it with a single metric, or do you use a multi-metric composite? If you use a composite, which metrics should go into it, and how do you weight them? And those questions are just the beginning. Beyond that, there are questions of what investment universe to start with, how many stocks to hold, when to rebalance, whether to filter out certain types of stocks, etc. All of these decisions require human intervention and a thoughtful decision-making process.
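To make these construction choices concrete, here is a minimal sketch of one common way to build a multi-metric value composite: rank the universe on each metric and blend the percentile ranks with chosen weights. The metrics, weights, tickers, and figures below are illustrative assumptions, not a description of any specific manager's model.

```python
# Hypothetical sketch of a multi-metric value composite.
# All metric choices, weights, and data here are illustrative assumptions.

def value_composite(stocks, weights):
    """Score each stock by its weighted average rank across several
    value metrics (a lower ratio = cheaper = a better, lower rank)."""
    n = len(stocks)
    scores = {s["ticker"]: 0.0 for s in stocks}
    for metric, weight in weights.items():
        # Sort ascending: the cheapest stock on this metric gets rank 0.
        ranked = sorted(stocks, key=lambda s: s[metric])
        for rank, s in enumerate(ranked):
            # Normalize rank to [0, 1] and apply the metric's weight.
            scores[s["ticker"]] += weight * rank / (n - 1)
    return scores

stocks = [
    {"ticker": "AAA", "p_e": 8.0,  "p_s": 0.6, "p_b": 0.9},
    {"ticker": "BBB", "p_e": 15.0, "p_s": 1.2, "p_b": 2.5},
    {"ticker": "CCC", "p_e": 25.0, "p_s": 4.0, "p_b": 6.0},
]
weights = {"p_e": 0.4, "p_s": 0.3, "p_b": 0.3}

scores = value_composite(stocks, weights)
# The lowest composite score is the cheapest blended-value stock.
cheapest = min(scores, key=scores.get)
print(cheapest)  # AAA ranks cheapest on every metric in this toy data
```

Even in this toy version, every design question from the paragraph above shows up as a parameter someone has to choose: which keys go in `weights`, how large the universe list is, and how often the scores are recomputed.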
The Evolution
It would be great if quantitative strategies could be run using a set-it-and-forget-it approach. If the value strategies from 30 years ago worked just as well now as they did then, without revisions, it would make all of our lives as value investors much easier. But that isn’t reality. Running a quant strategy in the real world requires the flexibility to update it when the evidence supports a change. And the decision as to whether a change is supported by the data typically has to be made by a human manager. It takes a unique skill set to weigh long-term evidence against a changing series of facts to determine when to make changes and what changes to make. If a portfolio manager makes the wrong decisions, it can reduce or eliminate any premium associated with the factors the portfolio is following.
Let me give you an example. In the first edition of What Works on Wall Street, Jim O’Shaughnessy used the Price/Sales ratio as the primary metric in his value strategy. But over time, he recognized that using a composite of metrics is a better method than relying on just one, since it limits the risk associated with any single factor. As a result, his value model in the most recent edition uses a composite of factors. That process of evolving a strategy over time and using evidence to guide the changes is a very important one.
Determining When a Factor Fails
Most of the time, the process of updating a quantitative strategy is a matter of evolution. But sometimes it takes more than that. Sometimes the evidence suggests that the foundation of the portfolio is based on something that no longer works. Let’s take the Price/Book ratio as an example. The Price/Book is the foundation for much of the academic research that supports value investing. As a result, it remains the most widely used value factor in terms of the assets that follow it. But does using the Price/Book still make sense in a world dominated by intangible assets that don’t show up in book value? A strong case can be made that it does not.
Given that intangible assets are much less of an issue among the cheapest stocks than they are in the growth space, though, you could also argue that avoiding the factor altogether is too extreme. The right answer might also be somewhere in the middle: rather than discarding the factor entirely, another option would be to recalculate the Price/Book using a system that tries to measure intangible assets.
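To illustrate what such a recalculation might look like, here is a hedged sketch of one approach researchers have used: capitalize past R&D spending, amortize it straight-line, and add the unamortized balance to book value before computing Price/Book. The five-year amortization life and all figures below are assumptions for illustration, not a recommendation of specific parameters.

```python
# Illustrative intangible-adjusted Price/Book sketch.
# Assumption: past R&D is capitalized and amortized straight-line
# over `life` years; the remaining balance is added to book value.

def adjusted_price_to_book(market_cap, book_value, rd_history, life=5):
    """rd_history: annual R&D spend, oldest year first, most recent last."""
    knowledge_capital = 0.0
    for age, spend in enumerate(reversed(rd_history)):
        # age 0 = most recent year; each year of age erodes 1/life of value.
        remaining = max(0.0, 1.0 - age / life)
        knowledge_capital += spend * remaining
    adjusted_book = book_value + knowledge_capital
    return market_cap / adjusted_book

# Reported P/B with no adjustment looks expensive at 5.0x...
pb = adjusted_price_to_book(market_cap=1000.0, book_value=200.0,
                            rd_history=[])
# ...but adding back unamortized R&D lowers the multiple meaningfully.
adj_pb = adjusted_price_to_book(market_cap=1000.0, book_value=200.0,
                                rd_history=[50.0, 50.0, 50.0, 50.0, 50.0])
print(round(pb, 2), round(adj_pb, 2))  # 5.0 2.86
```

Note how many judgment calls hide inside even this small sketch: which intangibles to capitalize, over what life to amortize them, and whether the same treatment should apply across industries. Those are exactly the human decisions the factor change requires.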
Regardless of your opinion on which option is correct, a thoughtful portfolio management team is needed to make that call.
The Proper Amount of Discretion
None of this is meant to take anything away from quantitative models. In my opinion, they are the best tool we have for investors who want to take advantage of the factors that work over the long term. And the fact that quant models can automate much of the investing process is a big advantage. But that doesn’t mean everything can or should be quantified, which is why a strong investment process is just as important. Machine learning may eventually change all of this, and we may truly be able to eliminate human decision-making from the process. But for now, the purely quant model is more of a dream than a reality.
Jack Forehand is Co-Founder and President at Validea Capital. He is also a partner at Validea.com and co-authored “The Guru Investor: How to Beat the Market Using History’s Best Investment Strategies”. Jack holds the Chartered Financial Analyst designation from the CFA Institute. Follow him on Twitter at @practicalquant.