Five Questions: Hidden Risks in Investing with Corey Hoffstein

By Jack Forehand, CFA (@practicalquant)

Investing in the stock market is a risky activity. The obvious risks, like losing a large portion of your portfolio during a bear market or underperforming the market by a wide margin with an active strategy, can be more than many investors can handle.

Given the high level of risk that is inherent in investing, it doesn’t make sense to take more risk than you have to. But many of us do, because we often take risks that we don’t even know we are taking.

My go-to source when I want to learn more about the topic of risk is Corey Hoffstein at Newfound Research. Unlike many firms that focus primarily on return and treat risk as a secondary consideration, Newfound has built its business around making risk the primary consideration.

Corey’s articles have led me to rethink risk and how best to deal with it. They also often highlight risks that many investors aren’t even aware of. These hidden risks can pose a significant problem because they are often uncompensated, meaning you don’t earn any extra return for taking them. Eliminating uncompensated risk is one of the few free lunches in investing: you get something without having to give up anything in return.

In this week’s interview, I talk to Corey about some of these risks, as well as some other issues surrounding risk and investing.


Jack: Thank you for taking the time to talk to us.

Many investors who utilize factor-based approaches have a favorite metric they like to use. Many value funds, for example, rely on the Price/Book ratio. With Price/Book struggling even more than other value metrics in recent years, funds that lean heavily on it have suffered even deeper struggles than value in general.

This issue of whether to pick a specific metric for a factor like value, or to combine them all in a composite to avoid taking unnecessary risk, is one that many factor investors struggle with. If an investor thinks one metric is clearly superior and picks the right one, they can boost their performance. But if they are wrong, they have taken additional risk and reduced their return in the process. How do you look at this issue of specification risk? When should investors pick one metric, and when should they use a broader approach?

Corey: I have taken to calling this decision “style versus specification.”  To keep with your example, value is the style and the metric of choice, like price-to-book, is the specification.  There is no shortage of potential specifications.  A survey of popular value indices finds a breadth of potential measures, including price-to-book, price-to-earnings, price-to-dividend, price-to-cash-flow, enterprise-value-to-EBITDA, and price-to-sales to name a few.

Historically, value as a style has worked regardless of the metric chosen.  In the short run, however, specification choices can lead to significant performance deviations.  Over longer rolling periods – say 30 years – even small annualized differences compound into meaningful dispersion in terminal wealth, which matters for investors who ultimately want to spend that wealth.
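To get a feel for the magnitude, here is a quick illustrative calculation; the return figures are hypothetical assumptions, not numbers from the interview.

```python
# Illustrative only: how a small annualized return gap compounds.
years = 30
r_a, r_b = 0.070, 0.065          # hypothetical annualized returns
wealth_a = (1 + r_a) ** years    # growth of $1 in specification A
wealth_b = (1 + r_b) ** years    # growth of $1 in specification B
print(f"terminal wealth ratio: {wealth_a / wealth_b:.2f}x")
# -> ~1.15x: a 0.5% annualized gap becomes ~15% more terminal wealth
```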

Without a particular view as to which metric will necessarily perform best going forward, it seems prudent to employ a diversified approach. 

With value in particular, I think this has the added benefit of providing multiple perspectives of how value can be measured.  Often one metric may do well identifying value in one type of business, but fare less well for another.  For example, cash-flow based measures have the advantage of being capital-structure neutral, but are not necessarily applicable across industries with different capital expenditure needs.   Sales-based measures can help identify value in firms with high reinvestment, but at a certain point we want to see actual earnings.

By using multiple measures, we can potentially better triangulate on value as an overarching style and better avoid specification-driven value traps.
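As a rough sketch of what such a composite might look like in code – the column names and equal weighting are illustrative assumptions, not Newfound’s actual methodology:

```python
import pandas as pd

def composite_value_score(fundamentals: pd.DataFrame) -> pd.Series:
    """Blend several value metrics into one composite score.

    `fundamentals` is assumed to hold one row per stock, with the
    hypothetical yield-style columns below (higher = cheaper).
    """
    metrics = ["earnings_yield", "book_to_price",
               "cashflow_to_price", "sales_to_price"]
    # Convert each metric to a cross-sectional percentile rank so the
    # measures are comparable, then equal-weight the ranks.
    ranks = fundamentals[metrics].rank(pct=True)
    return ranks.mean(axis=1, skipna=True)
```

A stock missing one metric still gets scored on the others, which keeps a single noisy or inapplicable data point from disqualifying a name.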

The same case can be made for other quantitative styles as well. 

Jack: You gave an excellent presentation at the Alpha Architect conference last year about timing luck and how it can introduce risk into investors’ portfolios that they aren’t aware of. You gave an example in the presentation showing that one 60% stock/40% bond portfolio can have appreciably different returns than another 60/40 portfolio based solely on when each portfolio is rebalanced during the year. If those differences are enough to cause one investor to panic while another stays the course, this seemingly small risk can actually have a significant impact on long-term returns.

How do you look at the issue of timing luck and what do you think is the best way for investors to deal with it?

Corey: I believe there are really three different axes of diversification: what, how, and when.

What will be the most familiar to investors, as it defines what we are invested in.  In other words, it captures asset-class (or correlation-based) diversification. 

How – which we touched upon briefly in the question above – asks about process-based diversification decisions. 

When represents when decisions are implemented and, in many ways, can be thought about as opportunity-based diversification.

It may come as a surprise to many, but failing to diversify when a portfolio makes decisions can lead to significant dispersion in terminal wealth, even for something as simple as a 60/40 portfolio.

The example I gave at the Alpha Architect conference was a simple 60/40 strategy that was rebalanced annually.  The only variable between implementations was when the rebalance occurred.

I specifically highlighted 2008 and 2009: an investor who rebalanced at the end of February was able to rebalance back to a 60/40 almost exactly at the 2009 lows.  Compared to an investor who rebalanced at the end of August, there was a 7 percentage point dispersion in performance.

The problem is that this dispersion is not mean reverting.  In other words, we do not expect the August portfolio to make up that gap.  The choice of which month to rebalance, then, represents a potentially non-trivial impact on results.

The same risk can be found in systematic equity strategies as well.  For example, Blitz, van der Grient, and van Vliet (2010)[1] found that a fundamental index rebalanced every March outperformed a fundamental index rebalanced every September by over 10 percentage points in 2009, despite being identically managed in process.

Nor is this only a crisis-driven phenomenon.  Post-2008, many tactical strategies have suffered timing-luck-driven whipsaw due to their end-of-month rebalance schedules.

Fortunately, the solution to managing timing luck is fairly trivial.  We can simply break our portfolio into equal pieces and rebalance each piece at a different point in time in a round-robin fashion.  For example, with our 60/40 portfolio, we might rebalance 1/12th of our capital in January, 1/12th in February, et cetera.  This process is called “overlapping portfolios” or “tranching” and has the effect of diversifying our decision making over time.
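A minimal simulation of the idea, using randomly generated and purely hypothetical stock and bond returns, might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
months = 240
stock_r = rng.normal(0.007, 0.045, months)  # hypothetical monthly returns
bond_r = rng.normal(0.003, 0.012, months)

def grow_6040(offset: int, n_tranches: int = 1) -> float:
    """Grow $1 in a 60/40 mix where each tranche rebalances annually.

    Tranche k rebalances in calendar month (offset + k) % 12, so
    n_tranches=1 is a single annually rebalanced portfolio and
    n_tranches=12 is the fully overlapped ("tranched") version.
    """
    stocks = np.full(n_tranches, 0.60 / n_tranches)
    bonds = np.full(n_tranches, 0.40 / n_tranches)
    for t in range(months):
        stocks = stocks * (1 + stock_r[t])
        bonds = bonds * (1 + bond_r[t])
        for k in range(n_tranches):
            if t % 12 == (offset + k) % 12:
                total = stocks[k] + bonds[k]
                stocks[k], bonds[k] = 0.60 * total, 0.40 * total
    return stocks.sum() + bonds.sum()

# Timing-luck dispersion across the 12 possible rebalance months,
# versus the single tranched implementation that averages over them.
annual = [grow_6040(m) for m in range(12)]
print("spread across rebalance months:", round(max(annual) - min(annual), 3))
print("tranched (1/12th each month):  ", round(grow_6040(0, n_tranches=12), 3))
```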

Unfortunately, very few firms have adopted this approach.  Worse, as far as I am aware, no major index provider has.  This means that there is a huge amount of random luck embedded in benchmark results.  With the magnitude of dispersion that timing luck can create, it’s entirely possible that we are hiring and firing managers based upon luck alone!

For those interested in a deeper dive, I co-authored a paper titled Rebalance Timing Luck: The Difference between Hired and Fired that was recently published in the Journal of Index Investing.

Jack: Another risk that can affect both factor and traditional portfolios is industry concentration risk. As a factor investor, it is tempting to want to overweight specific industries and sectors when the factor you are using identifies many stocks as attractive within that sector or industry simultaneously. But that introduces additional risk into a portfolio that may or may not be compensated. How do you think investors should look at striking a balance between building a portfolio with maximum exposure to a particular factor vs. taking excessive industry concentration risk?

Corey: I always try to keep in mind that quantitative models are always wrong.  No CEO would say that price-to-earnings, price-to-book, or EV-to-EBITDA accurately captures the full scope of their company’s valuation.  A simple metric is too imprecise to capture the nuance of each and every business.

But that imprecision is a feature, not a bug.  The signals are designed to be blunt instruments that can be leveraged across securities and time.  So, while price-to-earnings may not give an accurate picture for an individual security, it may give us a directionally accurate roadmap for sorting securities based on cheapness.

With that in mind, when it comes to taking big industry bets, I think we need to be careful.  If we know that our model is wrong, what does it mean for the type of risk we should be willing to bear?

I think it is also worth pointing out that there is a difference here for different styles.  A naïve cross-market sort on value, for example, tends to structurally overweight financials and underweight technology companies.  Do we really think such a tilt is a long-term, compensated bet?  Or is it more likely to be a byproduct of a blind spot in our model?

On the other hand, cross-market momentum tends to create cyclical over- and under-weights that have historically improved results.  This might be due to the fact that momentum is a fast-decaying signal and stocks in the same industry group tend to move together as they are affected by the same risk factors.

On the whole, I tend to start with the null hypothesis that the market is right and my model is wrong.  With that perspective, it’s only at very extreme readings – to paraphrase Cliff Asness, “the 150th percentile” – that I think really significant tilts should be considered.

Jack: One of the long-term risks in factor investing that can be very hard to quantify is the risk that a specific factor that has worked over the long term will stop working. Price/Book is an excellent current example. Price/Book has substantial long-term evidence to support it, but it has struggled mightily over the past decade. It is also likely the most widely followed value factor, with large firms like DFA using it extensively and Russell using it to construct its indexes. So with Price/Book you have a value metric that is widely followed and struggling, which leads many to ask whether it no longer works. You wrote an excellent piece that illustrates the difficulty of analyzing a situation like this. Your analysis showed that the length of time it would take to determine, based on performance alone, that a factor like Price/Book no longer works is likely longer than the investment horizon of most investors. Given that performance can’t provide the answer, what do you think the best approach is to analyze whether a historically successful investment strategy is no longer effective?

Corey: As quants, I think we’ve backed ourselves into a tough corner.  Our investment theses are based upon the long-term, cross-geographic, and cross-asset efficacy of our signals.  When traditionally measured, all this evidence makes the signals quite statistically robust.

All that supporting evidence means that decades of contrary evidence will likely be required before a style is statistically rejected.  The size premium, for example, took 35+ years after it was first published before it lost significance at the 1% threshold.

This is a problem for quants, as we might develop a qualitative answer as to why a signal should stop working long before the statistics bear it out.  This means we must answer the question, “when does statistical significance apply and when does it not?”  How can we use it as a justification in one place and completely ignore it in another?  The tools we use to establish and defend factors may prevent us from tearing them down.
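To illustrate the mechanics with made-up numbers (not the actual size-premium data): suppose a hypothetical style earned a strong premium for fifty years and then its true premium dropped to exactly zero. A back-of-the-envelope t-statistic shows how long the historical evidence keeps the style looking significant:

```python
import numpy as np

# Hypothetical: an 8% annual premium (20% vol) for 50 years, then zero.
mu_hist, sigma, n_hist = 0.08, 0.20, 50

n_dead = 0
while True:
    n = n_hist + n_dead
    mean = mu_hist * n_hist / n          # premium-free years add zeros
    t_stat = mean * np.sqrt(n) / sigma   # one-sample t-statistic
    if t_stat < 2.58:                    # roughly the 1% threshold
        break
    n_dead += 1
print(f"1% significance lost only after ~{n_dead} premium-free years")
```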

That does not mean we cannot establish quantitative evidence as to why certain signals may no longer work.  For example, I think the O’Shaughnessy Asset Management team wrote a fantastic, evidence-based piece on how price-to-book has changed over time.  While we may not be able to tear down the price-to-book factor using traditional performance-based statistical tests, we can still rely on deep, evidence-based research to try to overcome that hurdle.

Jack: Sequence risk is another risk that many investors don’t consider. It is widely assumed that if two investment strategies have the same series of annual returns, the order of those returns doesn’t matter. But in reality, the order can have a huge impact on long-term investment outcomes. I was wondering if you could talk about the impact of sequence risk and whether there are any strategies investors can employ to combat it.

Corey: On paper, it is easy to prove that the order of returns does not impact the annualized return achieved by an investor.  When investors make contributions and withdrawals, however, the order of returns has a significant impact upon their realized wealth level.  For example, large drawdowns in the early years of an investor’s retirement can permanently impair the lifestyle they can comfortably manage.
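A toy example, with hypothetical returns and withdrawal amounts, makes the asymmetry easy to see:

```python
import numpy as np

# The same hypothetical annual returns, applied in two different orders.
returns = np.array([-0.20, -0.10, 0.05, 0.12, 0.15, 0.18])

def final_wealth(rets, start=1_000_000, withdrawal=40_000):
    """Withdraw a fixed amount at the start of each year, then grow."""
    wealth = start
    for r in rets:
        wealth = (wealth - withdrawal) * (1 + r)
    return wealth

print(f"losses first: {final_wealth(returns):>12,.0f}")
print(f"gains first:  {final_wealth(returns[::-1]):>12,.0f}")
# With no withdrawals the two orderings end at exactly the same value;
# with withdrawals, the losses-first path ends materially poorer.
```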

This is one of the primary reasons why investors tend to implement a glidepath that de-risks their portfolio as they get older.  Sequence risk tends to peak around retirement years (as investors change from accumulation to decumulation) and so it makes sense to reduce drawdown risk at that point by increasing our allocation to more stable securities like high quality, short-term fixed income.
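A simple linear glidepath can be sketched in a few lines; the ages and weights here are hypothetical placeholders, not a recommendation:

```python
def glidepath_equity_weight(age: float, start_age: float = 25,
                            retire_age: float = 65,
                            start_eq: float = 0.90,
                            floor_eq: float = 0.30) -> float:
    """Equity weight falls linearly from start_eq at start_age to
    floor_eq at retire_age, then holds the floor in decumulation."""
    if age <= start_age:
        return start_eq
    if age >= retire_age:
        return floor_eq
    frac = (age - start_age) / (retire_age - start_age)
    return start_eq + frac * (floor_eq - start_eq)

# e.g. glidepath_equity_weight(45) -> 0.60 equity / 0.40 fixed income
```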

The glidepath is just one solution, however.  Another solution might be to retain a higher equity allocation, but implement a defensive mandate (e.g. high quality, low-volatility stocks).

Or an investor might implement a dynamic beta strategy, which attempts to increase the portfolio’s allocation to equities during risk-on environments and de-risk during risk-off environments.  This is the type of strategy that my firm has specialized in over the last decade.

Each approach has its own pros and cons and therefore it can make sense for investors to diversify their diversifiers.

Jack:  We have discussed some risks in this interview that many investors reading it likely have never thought of before. Since you spend a significant amount of time looking at risks that others may ignore, I am wondering if you can think of any other risks that investors often ignore, but that can have a significant effect on long-term outcomes.

Corey: A lot of investing research tends to focus on the shiny objects: exciting new signals for generating alpha.  In practice, however, portfolios are subject to the tyranny of small decisions. 

Let’s say we want to build a portfolio based upon momentum.  We have to decide which momentum measure to use, the lookback period to evaluate it over, how frequently we will rebalance, how concentrated we want the portfolio to be, and the weighting scheme we will employ.  If there are 10 possible choices for each of these dimensions, we’d have 100,000 possible strategy combinations. 
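The arithmetic is simple but worth seeing spelled out; the dimension names below just mirror the list above, with ten placeholder choices in each:

```python
from itertools import product

# Five design dimensions, ten hypothetical choices per dimension.
dimensions = {
    "signal": 10,     # e.g. total-return vs. residual momentum, ...
    "lookback": 10,   # e.g. 3-month through 12-month windows
    "rebalance": 10,  # e.g. weekly, monthly, quarterly, ...
    "holdings": 10,   # e.g. top 10, 20, ..., 100 names
    "weighting": 10,  # e.g. equal, cap, signal-proportional, ...
}
grid = product(*(range(n) for n in dimensions.values()))
print(sum(1 for _ in grid))  # 10**5 = 100,000 distinct specifications
```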

In one study, we found that the variation across momentum strategy implementations was greater than the variation across factors.  In other words, the performance dispersion across momentum strategy implementations tended to be larger than the dispersion between value and momentum. 

We witness this dispersion in practice, too.  Consider that the spread in year-to-date active returns of different momentum ETFs exceeds 8 percentage points. 

These differences are neither persistent nor mean reverting, so we see significant dispersion in the terminal wealth achieved depending on the choice of implementation.

Unless an investor has a particular view as to why one implementation is necessarily better than another, diversification can again prove useful.

Jack: Thank you again for taking the time to talk to us today. If investors want to find out more about you and Newfound Research, where are the best places to go?

Corey: The pleasure was mine – thank you!

Investors can learn more about Newfound Research on our website.  They can also find our research on our blog, listen to our podcast, or follow me on Twitter.


[1] Blitz, D., van der Grient, B., and van Vliet, P.  (2010).  “Fundamental Indexation: Rebalancing Assumptions and Performance,” Journal of Index Investing, Vol. 1, No. 2, 82-88.

Photo Credit: 123rf/nomadsoul1


Jack Forehand is Co-Founder and President at Validea Capital. He is also a partner at Validea.com and co-authored “The Guru Investor: How to Beat the Market Using History’s Best Investment Strategies”. Jack holds the Chartered Financial Analyst designation from the CFA Institute. Follow him on Twitter at @practicalquant.