Five Questions: Tackling Some of the Toughest Questions in Investing with Michael Mauboussin

By Jack Forehand, CFA (@practicalquant)

There are many skills that can benefit you as an investor. Being smart is certainly an asset, although it can also get you in trouble if you don’t know your limitations. The ability to control your emotions is also a huge plus. But I think the most important skill may be the ability to think critically. Having a framework through which to view the investing world, and following it through the myriad problems you will face as an investor, is crucial to success.

I can’t think of anyone I have seen in my investing career who is better at developing thoughtful frameworks to tackle challenging problems than Michael Mauboussin. Michael is the Director of Research at BlueMountain Capital Management. His research notes are must-reads for anyone who wants to better understand investing. And his books provide an excellent roadmap for analyzing the complex, adaptive systems that characterize today’s markets.

So it goes without saying that I was really excited when he agreed to do our Five Questions interview. I also wanted to take full advantage of the opportunity, so we cover some of the most difficult topics facing investors today: the active vs. passive debate, the decline in available alpha, and the future of value investing.


Jack: Generating alpha has always been hard in markets. But we seem to be in a period where the level of difficulty is rising exponentially. We are also in the midst of an extended period of falling fees. You have shown in your work that the total alpha available to investment managers has been falling. You have also shown that the common belief that beating the market is easier as the number of active managers declines is false since the managers that remain are typically the better ones. Not only is the job of beating the market getting harder, but the fees managers get paid to attempt it are also falling. I was wondering if you had any insights with respect to how these trends will play out going forward. Do you think we will reach a point where falling alpha and fees will level out and an equilibrium will be reached?

Michael:  Hi, Jack. Great to have the opportunity to speak with you.

This is a fascinating and relevant question. It might be best to break it down and consider various aspects.

The first point is that alpha, before fees and in aggregate, is zero by definition. That means for every winner there has to be a loser. The next thing you want to consider is what the ecosystem of investors looks like. I have often used the metaphor of a poker table: for you to win money, someone across the table has to lose. So it’s crucial to think about why you believe you’re one of the more skilled players. We wrote a piece about this called “Who Is On the Other Side?”

The next issue is the idea that it would appear easier to beat the market as more investors allocate to index funds and other rules-based strategies. I don’t think that’s the case. The reason is that I believe many of the least sophisticated investors are those who are indexing. It’s like the weak player failing to show up at your poker game. All of a sudden, you’re playing against only other sharp players. Your life just got a lot harder, not easier. This played out during the dot-com boom. When sophisticated institutions were competing against relatively unsophisticated individuals, the alpha for institutions rose sharply. But once the individuals fled the market following the bursting of the bubble, alpha again became scarcer.

Finally, the question about an equilibrium between active and passive is both fascinating and tricky. My starting point is the paper by Grossman and Stiglitz called “On the Impossibility of Informationally Efficient Markets,” which was published in 1980. The basic argument is that markets cannot be informationally efficient because there’s a cost to gathering information and reflecting it in prices. So there always has to be some benefit in the form of excess return to lure investors to assume the cost of active management. You might think of the absolute value of alpha as the benefit and fees as the cost. You could argue that benefit and cost should be in rough balance.

Our analysis of large-capitalization mutual fund managers shows that the standard deviation of alpha has been coming down for decades—again, with that exception during the dot-com era. Part of that is likely the result of lower volatility, and part is due to heightened competition. So the benefit has come down.

If you believe the benefit/cost story, the benefit in the form of alpha has come down and hence fees should follow. And that is largely what has happened. But the logic tells you that the move away from active can’t go on forever because there really is a cost to gathering information and reflecting it in prices. So where that equilibrium is, exactly, is hard to say.

Another model that I believe bears on this discussion is that of Berk and Green as they describe in their paper, “Mutual Fund Flows and Performance in Rational Markets.” There are lots of interesting arguments in the paper, but the feature I focus on is the idea of dollar value extraction from the market.

It’s not the percentage of alpha that matters; it’s the percentage times the assets under management. They define value added as pre-fee alpha times assets under management. A portfolio manager running $100 million who generates 50 basis points of pre-fee alpha extracts only $500,000 from the market, whereas a portfolio manager running $1 billion who generates 10 basis points of pre-fee alpha extracts $1 million.
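To make the arithmetic concrete, here is a minimal sketch in Python of the Berk and Green value-added calculation, using the two hypothetical managers from the example above:

```python
def value_added(aum: float, pre_fee_alpha_bps: float) -> float:
    """Dollar value extracted from the market: pre-fee alpha times AUM."""
    return aum * pre_fee_alpha_bps / 10_000  # convert basis points to a decimal

# The two managers from the example above
small_fund = value_added(aum=100_000_000, pre_fee_alpha_bps=50)    # $500,000
large_fund = value_added(aum=1_000_000_000, pre_fee_alpha_bps=10)  # $1,000,000

print(f"$100M at 50 bps extracts ${small_fund:,.0f}")
print(f"$1B at 10 bps extracts ${large_fund:,.0f}")
```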

The cumulative added value from active equity managers roughly equals fees over time, but it exceeds fees in fixed income. The Berk and Green model is another way to quantify the tradeoff that Grossman and Stiglitz describe.

All that said, investors should always be mindful that market efficiency falls along a continuum. Some markets are highly efficient and others are much less so. However, where you find market inefficiency you almost always find costs—costs to gather information, costs to trade, or limits on how big you can get.

Jack: The continued rise of passive investing has triggered significant debate within the investment community with respect to whether it distorts market prices. On one hand, if money is flowing disproportionately into the largest stocks without consideration for their fundamentals, you could argue that their stock prices will become decoupled from their true values. But on the other, if this situation did occur, one would expect active managers to step in and rectify any distortion. Do you think the concerns about passive investing distorting stock prices are valid?

Michael: This is a really vital question that I don’t know how to answer satisfactorily.

On the one hand, there is evidence that indexing has created some inefficiencies. For example, passive funds have to trade because of rebalancing and other issues, and there are opportunities for active managers on the other side of those trades. Further, there is evidence that indexing can have an impact on valuation and liquidity.  

I would also make the observation that the substantial flows into index funds and the success of the growth factor have gone hand in hand. I don’t know that the relationship is causal, but the valuation spread between “expensive” and “cheap” stocks is toward the high end of its historical range.

Yet it doesn’t appear that investing actively has gotten a lot easier.

Ultimately, if I were to make the case for indexing distorting prices, I would turn to an argument based on the wisdom of crowds. Crowds tend to come to an efficient solution when there is heterogeneity of the investor population. If all of the investors behave the same way, the wisdom of crowds can yield to the madness of crowds. I don’t know that we are there, but that is what I would monitor. 
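To illustrate that intuition, here is a hypothetical simulation (not from Michael’s work; the parameters are invented) showing how the crowd’s average estimate degrades as investors’ views become more homogeneous:

```python
import numpy as np

rng = np.random.default_rng(42)
true_value, n_investors, n_trials = 100.0, 500, 2_000

def crowd_error(herding_weight: float) -> float:
    """Average error of the crowd's mean estimate when each investor's
    error mixes a shared (herding) component with an independent one."""
    errors = []
    for _ in range(n_trials):
        shared = rng.normal(0, 10)            # bias everyone shares
        own = rng.normal(0, 10, n_investors)  # idiosyncratic views
        estimates = true_value + herding_weight * shared + (1 - herding_weight) * own
        errors.append(abs(estimates.mean() - true_value))
    return float(np.mean(errors))

for w in (0.0, 0.5, 1.0):
    print(f"herding weight {w:.1f}: mean crowd error {crowd_error(w):.2f}")
```

With fully independent views, individual errors diversify away; as the shared component dominates, the crowd’s error approaches that of a single investor.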

Jack: Your book Expectations Investing offers one of the best explanations I have seen for how markets work. In the book, you talked about how an investor should look at the expectations embedded in a stock’s price and judge that relative to their view of reality in order to identify opportunities. For those who believe in the behavioral argument for things like value and momentum, the fact that expectations can sometimes become separated from reality is crucial to their continued long-term success. But I am wondering if the rise of data and computing power has made the market more efficient and this gap narrower. Do you think the gap between expectations and reality has become more difficult for investors to exploit? Do you think that poses a problem for strategies like value and momentum that rely on the separation between expectations and reality?

Michael: Thanks for this question. My first reaction is that while we wrote about that nearly 20 years ago, and closer to 35 years if you consider Al Rappaport’s book Creating Shareholder Value, it’s remarkable how underutilized the technique remains. Investors seem very focused on what they think a business is worth rather than asking “what has to happen to justify today’s price?”

When you consider the usefulness of factors and how they might relate to expectations investing (EI), the main question is whether those factors reflect risk or a behavioral inefficiency. If it’s risk, then I’m not sure there’s much the EI approach can add. It would suggest the discount rate is off.

But if the factors work for behavioral reasons, then EI is a powerful tool. I should note that the value factor relies on negative feedback, or regression toward the mean, and the momentum factor relies on positive feedback, or repulsion from the mean. And these factors show up everywhere.
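As a purely illustrative toy (the feedback coefficients here are invented, and this is not a market model), the two dynamics can be captured with a single parameter: negative feedback pulls price back toward fair value, while positive feedback pushes it further away:

```python
import numpy as np

rng = np.random.default_rng(0)
fair_value, n_steps = 100.0, 250

def simulate(feedback: float) -> np.ndarray:
    """Toy price path: each step adds feedback * (price - fair value)
    plus noise. Negative feedback mean-reverts; positive feedback trends."""
    prices = [110.0]  # start above fair value
    for _ in range(n_steps):
        gap = prices[-1] - fair_value
        prices.append(prices[-1] + feedback * gap + rng.normal(0, 1))
    return np.array(prices)

value_style = simulate(feedback=-0.10)    # regression toward the mean
momentum_style = simulate(feedback=0.01)  # repulsion from the mean

print(f"negative feedback path ends near {value_style[-1]:.1f} (fair value 100)")
print(f"positive feedback path ends near {momentum_style[-1]:.1f}")
```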

We can really separate two issues. The first is whether gaps between price and value arise. That is almost certainly true, although finding and taking advantage of those gaps is always challenging. The second issue is how you would approach the task. I am naturally very biased, but I remain convinced that EI is a powerful way to do it.

Jack:  You have been a big advocate of base rates. After watching my own failed attempts, and the attempts of others, to forecast what will happen during my investment career, I have definitely come around to using the outside view and relying on the past to inform my view of the future. One of the things I struggle with in using base rates is whether there are sometimes major events that render them much less useful than they have been in the past. To use value investing as an example, if you look at the base rates for what has happened historically when value stocks have struggled over a decade and when spreads between cheap and expensive stocks get to the levels they are now, you would be very optimistic about what the next decade might hold for value. But others argue that there have been major events that significantly limit the value of those base rates. For example, some argue that the invention of the microchip was a turning point that has permanently swung the pendulum toward growth companies and away from value. Others argue that the Fed’s use of quantitative easing and the lower rates that come with it will do the same. I was wondering if you could talk about how you view the use of base rates in light of these types of events and how you evaluate when the past is not a valid way to try to predict the future?  

Michael: You are absolutely right. The application of base rates is easier in some settings than in others. I have a few thoughts.

First, base rates in general are vastly underutilized. You pointed to conditions where their use is more difficult, but there are lots of cases where they are relevant and simply not used. I think there are a couple of reasons for that. First, people place a lot of weight on the information they gather and on their own experience, so they are not inclined to even introduce the idea of a base rate. Second, people generally don’t have access to robust data on base rates. So even if they wanted to use the data, the numbers are not at their fingertips. To address this, we assembled a useful repository of base rates for corporate performance.

Next, there are some techniques that allow you to make headway even on questions that don’t easily lend themselves to analysis through base rates. Some of the best thinking I’ve seen on this is from Phil Tetlock. Phil’s work was summarized effectively by Shane Parrish at Farnam Street. My experience is that even asking people to think about particular outcomes using probabilities prompts improved thinking, discussion, and debate in organizations.   

As for macro topics, I’ve always been a fan of being macro aware but macro agnostic. That means that you want to model the repercussions of lots of outcomes without having to make a forecast of what exactly will happen.

The final thought is that people tend to be too optimistic, and base rates can temper that enthusiasm. The planning fallacy is a good example. When planning to achieve an objective, say repair a bridge, we tend to think it will happen sooner and at a lower cost than it ultimately does. Base rates help offset that misplaced optimism.
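One simple way to put this into practice (a sketch with hypothetical numbers, not a prescribed method) is to shrink the inside-view estimate toward the base rate for comparable projects:

```python
def blended_estimate(inside_view: float, base_rate: float, weight_on_base: float) -> float:
    """Blend the inside view (our own plan) with the outside view (the base
    rate for similar projects). The weight placed on the base rate is a
    judgment call about how representative the reference class is."""
    return weight_on_base * base_rate + (1 - weight_on_base) * inside_view

# Hypothetical bridge repair: we plan on 12 months, but comparable
# projects have historically averaged 20 months.
print(f"{blended_estimate(inside_view=12, base_rate=20, weight_on_base=0.6):.1f} months")
# -> 16.8 months, a schedule tempered by the base rate
```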

But optimism has a benefit: it motivates people to try things they wouldn’t otherwise try. Most of the great inventions came from optimists. So while we may want to check optimism through the use of base rates ourselves, there is little doubt that optimism is a good thing for society.     

Jack: The rise of computing power and the amount of data that is available to investors has led to a significant rise in the number of managers that utilize quantitative approaches. It has also led many to question the value of the human investment manager. Computers are certainly able to take in and process more data than humans can. And they are also free from the emotions and biases that can often derail human investors. But they haven’t gotten to the point where their ability to think is on par with what a human can do. What are your thoughts on the pros and cons of using a human manager relative to a quantitative system in today’s markets?

Michael: My take is that there are some tasks that humans are better suited to complete than machines and some tasks that can be algorithmic. But fundamental investors need to think more systematically, and systematic investors need to think more fundamentally.

I wrote a piece where I tried to identify the relative strengths of fundamental and systematic investors. I think that fundamental investors should evaluate their entire investment process, from idea generation to portfolio construction, and consider carefully what tasks may be better done systematically. One example that comes to mind is position sizing.
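The piece doesn’t prescribe a particular formula, but as one well-known example of systematic position sizing, here is a sketch of the Kelly criterion (the inputs below are hypothetical):

```python
def kelly_fraction(win_prob: float, win_loss_ratio: float) -> float:
    """Classic Kelly criterion: f* = p - (1 - p) / b, where p is the
    probability of a win and b is the win/loss payoff ratio."""
    return win_prob - (1 - win_prob) / win_loss_ratio

# A position judged to have a 55% chance of working and a 1.5-to-1 payoff
full_kelly = kelly_fraction(win_prob=0.55, win_loss_ratio=1.5)
half_kelly = 0.5 * full_kelly  # practitioners often scale down to reduce variance

print(f"full Kelly: {full_kelly:.1%}, half Kelly: {half_kelly:.1%}")
# -> full Kelly: 25.0%, half Kelly: 12.5%
```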

At the same time, it’s important to be mindful that the rules that systematic strategies devise are created and monitored by people. The tastes and biases of the researcher rarely get wrung out of a systematic process. In this way, all strategies rely on humans.

But this raises the biggest issue: humans have a hard time sticking to a process when confronted with poor results, even when those results are relatively short term in nature. Examples of this abound, including in the worlds of sports and business. That’s where a systematic approach can be a huge benefit. It allows the process to play out.

Jack: Thank you again for taking the time to talk to us today. If investors want to follow your work, where are the best places for them to go?

Michael: My pleasure!

You can follow me on Twitter @mjmauboussin or go to my website, http://michaelmauboussin.com/.



Jack Forehand is Co-Founder and President at Validea Capital. He is also a partner at Validea.com and co-authored “The Guru Investor: How to Beat the Market Using History’s Best Investment Strategies”. Jack holds the Chartered Financial Analyst designation from the CFA Institute. Follow him on Twitter at @practicalquant.