The Art of Streetplay

Sunday, December 25, 2005

Taking Another Look at Arnott (Why Not?)

As long-time readers know, I am interested in indexation, and I have a few thoughts on Arnott's Fundamental Indexation. Before diving into the improvements, though, I thought it would be worth taking a closer look at the theoretical underpinnings of his rationale, which I break up into a few parts.

I'd break things down into two claims. One is that the S&P is inefficient because of cap weighting; the other is that Fundamental Indexing can do a better job. The two seem theoretically somewhat orthogonal, so separating them should help flesh things out. Along the way, I throw out some implications and a test I'd be interested to see.

As usual, if anyone has any feedback I would be very interested to hear it. As a word of warning, this is one of the more technical posts.

The Inefficiency Claim

The inefficiency claim is pretty clear. As I see it, it comes down to the fact that deviations from intrinsic value, net-net, have roughly zero expected return and tend to mean revert.

Assume that every stock's price has two components: its intrinsic value and some idiosyncratic noise around that value. Hypothetically, if I knew a priori the future evolution of the changes in intrinsic value of all stocks, and I were to net all stock prices against my perfect estimates of intrinsic value, I would be left with a set of residuals whose returns should have zero mean and a mean-reverting tendency.

If deviations are comparable in terms of returns rather than dollar value, then small caps and large caps are equally likely to deviate by, say, 1% from intrinsic. In reality this might not hold exactly, but I would expect it to be a reasonable approximation. The dollar impact of that deviation, however, will be much larger for the large cap than for the small cap. On a period-by-period basis, then, if I were to invest as if I were the S&P, I would systematically emphasize the fluctuations of large cap stocks more than those of small cap stocks-- and rightly so if the variation were due to intrinsic value shifts.

But if one were to run the simulation mentioned above, one would see that if all stocks' prices were initially set to intrinsic value, the idiosyncratic variations force the market to over-emphasize the stocks with positive idiosyncratic residuals relative to a market which fluctuates entirely off of changes in intrinsic value. The mean-reverting property of the idiosyncratic noise is then the killer, because, probabilistically speaking, it puts a drag on precisely the stocks that were over-emphasized. Thus, the problem.
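To make the mechanism concrete, here is a minimal simulation sketch of the argument. Everything in it-- the parameter values, the AR(1) form of the noise, the variable names-- is my own assumption for illustration, not anything taken from Arnott's paper.

```python
# A toy simulation of the argument above. Intrinsic values follow a random
# walk; observed prices equal intrinsic value times a mean-reverting
# "mispricing" term. A cap-weighted index (weights proportional to price)
# is compared to a hypothetical index weighted by true intrinsic value.
# Every parameter here is a made-up assumption for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_stocks, n_periods = 500, 600    # e.g. 500 names over 600 "months"
phi, noise_vol = 0.90, 0.03       # AR(1) persistence / volatility of mispricing
iv_drift, iv_vol = 0.005, 0.04    # intrinsic value growth per period

log_iv = 2.0 * rng.standard_normal(n_stocks)   # wide spread of company sizes
noise = np.zeros(n_stocks)                     # start exactly at intrinsic value
cap_ret, iv_ret = [], []

for _ in range(n_periods):
    price = np.exp(log_iv + noise)
    iv = np.exp(log_iv)

    # Evolve intrinsic value (random walk) and mispricing (mean-reverting AR(1)).
    log_iv_next = log_iv + iv_drift + iv_vol * rng.standard_normal(n_stocks)
    noise_next = phi * noise + noise_vol * rng.standard_normal(n_stocks)
    price_next = np.exp(log_iv_next + noise_next)

    stock_ret = price_next / price - 1.0
    cap_ret.append(np.average(stock_ret, weights=price))  # overweights positive noise
    iv_ret.append(np.average(stock_ret, weights=iv))      # the unobservable benchmark

    log_iv, noise = log_iv_next, noise_next

print("cap-weighted mean return:       %.5f" % np.mean(cap_ret))
print("intrinsic-weighted mean return: %.5f" % np.mean(iv_ret))
```

With mean-reverting noise (phi < 1), the cap-weighted average return should come out a touch below the intrinsic-weighted one, which is the drag described above; set noise_vol to zero and the gap should disappear.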

Is there a flaw in that logic?

The Implications of S&P Inefficiency
If the S&P is indeed inefficient, there are quite a few consequences. "The market" is supposed to be mean-variance efficient. We use it all the time in our finance courses as the basis behind the market risk premium, and we use it to get our hands around the tradeoff between risk and expected return. All of that would basically be wrong, and we might have to raise the hurdle rates of our projects by a couple hundred basis points.

Of course, it was wrong beforehand too. To be technical, the stock market is a pretty poor proxy for the real market, i.e. the whole economy, with a lot of very particular nuances (Zack, I'm sure you explain this 10x better than I can). This just means that, even as a representation of the stock market alone, the S&P does a poor job.

The Improvement Claim
The second claim is that Fundamental Indexing can do better.

I can't be as confident here, but the rationale, from my point of view, goes something like this. All stocks in the S&P are supposed to be weighted by their intrinsic values. But if one assumes that prices deviate from intrinsic value, the argument above implies that cap weighting, although a great proxy for company size, has problems. Why not try other proxies for company size which might not carry the bias that cap weighting has? Income, for example, has a 95% return correlation with the S&P, has almost as much capacity as the S&P, also tends to favor very large companies, and doesn't create markedly deviant industry allocations. It doesn't take on much more small-stock risk in the Fama-French sense, and rebalancing schemes can bring turnover down to the level of the S&P itself. It definitely carries more F-F "value" exposure, but it isn't taking on more risk in terms of liquidity, interest rate regime, or bull/bear market cycle. It's just trying to proxy for company size without the bias, albeit with lower data resolution.
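For concreteness, here is what the reweighting looks like in miniature. The numbers are made up; the point is only that a fundamental size proxy such as income produces weights that ignore how richly the market prices that income.

```python
# Toy illustration of cap weights vs. fundamental (income-based) weights.
# All figures are hypothetical.
import numpy as np

names      = ["MegaCo", "BigCo", "MidCo", "SmallCo"]
income     = np.array([12.0,  8.0,  3.0, 1.5])    # trailing income, $bn (made up)
market_cap = np.array([400.0, 90.0, 30.0, 10.0])  # market cap, $bn (made up)

cap_w  = market_cap / market_cap.sum()
fund_w = income / income.sum()

for name, cw, fw in zip(names, cap_w, fund_w):
    # MegaCo trades at a rich multiple, so its fundamental weight falls well
    # below its cap weight; the cheaply priced names get pulled up.
    print(f"{name:8s} cap weight {cw:6.2%}   fundamental weight {fw:6.2%}")
```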

Tempering Expectations; Possible Improvement
While the above rationale is intuitively appealing, its improvement relative to the S&P depends on the degree of mean reversion in the idiosyncratic noise. If "irrational" price movements take years to correct themselves, then attempts to trade on this noise, while positive in expected value, could take so long and suffer large enough drawdowns that it could very well be infeasible to trade on.

That being said, Arnott himself showed that, historically, a fundamentally indexed portfolio outperforms by approximately 200 basis points-- a sizable margin given the length of the back-testing period he used.

To take a closer look at the inefficiency, one can draw a direct link between a fundamental metric and market cap. Take free cash flow ('FCF'), for example, as our fundamental metric. Market cap ('MC') is simply FCF multiplied by MC/FCF, the FCF multiple. Looked at from this angle, the inefficiency implies mean reversion in the multiple-- MC/FCF in this case. But Arnott never works out those statistics, from what I could see in his paper-- he simply turned to other stats which implied mean reversion somewhere. So I'm thinking he could be missing some alpha which could be captured with a little additional complexity. If all companies are reduced to two numbers-- FCF and P/FCF, for example-- then weighting entirely on FCF implies that forward returns are independent of the multiple, right? But I would think that a company which does $50M in FCF on a 20 multiple has a different payoff profile than a similar company which does $50M on a 3 multiple. The multiple implies something about the quality of the underlying earnings, and that quality isn't picked up by FCF on a standalone basis. While Arnott's methodology would definitely reallocate towards the lower-multiple company relative to the higher-multiple one, it might still be giving too little credit to the 20 multiple, because the market seems to be saying there is something about that FCF which is more valuable to investors.
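To make that decomposition explicit (my framing, with made-up numbers, not Arnott's): since MC = FCF x (MC/FCF), a stock's log price change splits cleanly into FCF growth plus the change in the multiple, so any mean reversion driving the inefficiency has to show up in the multiple term.

```python
# Split a change in market cap into FCF growth and multiple change.
# MC = FCF * (MC/FCF), so log(MC1/MC0) = log(FCF1/FCF0) + log(mult1/mult0).
# All inputs are hypothetical.
import math

def decompose(fcf0, fcf1, mult0, mult1):
    total     = math.log((fcf1 * mult1) / (fcf0 * mult0))
    from_fcf  = math.log(fcf1 / fcf0)
    from_mult = math.log(mult1 / mult0)
    return total, from_fcf, from_mult

# Two firms, both doing $50M of FCF and growing it 10%, but starting at very
# different multiples which drift halfway back toward a 10x "norm".
print(decompose(50, 55, 20, 15))   # rich firm: FCF growth eaten by multiple compression
print(decompose(50, 55, 3, 6.5))   # cheap firm: FCF growth plus multiple expansion
```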

Has anyone seen a test which buckets the market by FCF, then buckets again by the multiple, creating a matrix of subgroupings, and then populates that matrix with 1-year forward returns on a year-by-year basis? Collecting, say, 50 years of data would create a 3D matrix. With this one could test the claim that FCF and P/FCF are indeed independent of one another, and see whether there is any additional insight to be gained.
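I haven't seen one, but the mechanics are simple enough. Below is a sketch of how such a test might be set up, assuming a pandas DataFrame with hypothetical column names ('year', 'fcf', 'mc_fcf', 'fwd_ret'); none of this comes from Arnott's paper.

```python
# Sketch of the double-sort test: bucket by FCF, bucket again by the multiple,
# and fill the grid with 1-year forward returns, year by year.
# Assumes a DataFrame with (hypothetical) columns:
#   'year', 'fcf', 'mc_fcf' (the MC/FCF multiple), 'fwd_ret' (1y forward return)
import pandas as pd

def double_sort(data: pd.DataFrame, n_buckets: int = 5) -> pd.DataFrame:
    yearly = []
    for year, grp in data.groupby("year"):
        grp = grp.copy()
        grp["fcf_bucket"] = pd.qcut(grp["fcf"], n_buckets, labels=False)
        grp["mult_bucket"] = pd.qcut(grp["mc_fcf"], n_buckets, labels=False)
        cell = grp.groupby(["fcf_bucket", "mult_bucket"])["fwd_ret"].mean()
        yearly.append(cell.rename(year))
    # Rows: (FCF bucket, multiple bucket); columns: years -- the "3D matrix".
    return pd.concat(yearly, axis=1)

# Averaging over the years collapses the 3D matrix into an n x n grid:
# grid = double_sort(data).mean(axis=1).unstack("mult_bucket")
```

If the rows of that averaged grid turned out not to be flat-- if, holding FCF roughly constant, forward returns varied systematically with the multiple-- that would be exactly the extra information a pure FCF weighting leaves on the table.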

Closing Thought (Thanks Mike!)-- Schema Theory
Mike over at TaylorTree posted a kind reference to a couple of my prior posts in one of his latest entries. I agree with him completely when he references the tradeoff between simplicity and complexity. I just thought I'd chip in with a few thoughts from the intriguing field of cognitive development... and my favorite theory of how we acquire knowledge, Schema Theory.

Under schema theory, knowledge takes the form of a multitude of 'schema', which, broadly speaking, are mental representations of what all instances of something have in common. As an example, my "house" schema represents what is common to all houses that I've been in. A house has parts, it's made of many things, it can be used for a variety of purposes... the list goes on. This is important because when I look at 1,000 houses, they aren't all completely different from each other-- they share broad similarities for which I have mental categories, and with those categories I can compare the houses.

The transition from complex to simple and back to complex might at least partially be explained by how schema theory describes our learning process. Schemas decompose complexity through categorization and abstraction. I'm not big on terminology, so I thought an example might make things a little clearer.

When dealing with new experiences, we have a tendency to treat them as new and different from what we've experienced in the past. For example, if someone were to throw me a ticker and have me look at its business, I would, at the outset, treat all the new information I take in regarding the company as new. I would probably begin by gathering general information about the company-- business line, industry, margins, growth, etc. To a large extent, those data points, at least at the start, don't really have a place. They are just distinct facts. From a cognitive-utilization point of view, this is really, really inefficient! I'm being forced to use all of the slots I've got up there in my brain just to digest all these little random tidbits of information!

What happens over time, though, is that linkages form. The high margins of the company make sense because they've been able to grow sales without any corresponding growth in assets, so much of the sales growth is simply going straight through to the bottom line. Assets aren't growing because their business does a remarkable job of flexing capacity. Their margins are staying up because of cost-related nuances. The magnitude of the sales growth is explainable by the geography the company resides in and the customers it does business with. All the facts-- the qualitative concepts and the hard numbers-- naturally fall into place, and instead of thinking of the company as 10,000 distinct data points all independent of one another (complexity), it is simply "the company" (total simplicity). All the facts are entangled in a web which sticks so tightly to itself that they really are all one idea in your head. It goes from using all of our cognitive slots to just one. And it does so by characterizing the company through the same analytical categories which were used to analyze the hundreds of other companies that have been looked at.

In this context it makes sense that things naturally ebb and flow between simple and complex. We are constantly trying to expand our intellectual borders, learning new tools and new ways of looking at things... but at the same time we are also doing some heavy-duty simplification. Making things complicated and making them simple are both pillars of cognitive development, and something that can be optimized.

1 Comment:

  • Dan,

    I always learn something new when reading your posts! Schema Theory is something I've never heard of...just experienced. :) Just might have to read up on this topic. Thanks again for sharing your insight!

    Schema Theory reminds me of a Bruce Lee quote:

    "In JKD, one does not accumulate but eliminate. It is not daily increase but daily decrease. The height of cultivation always runs to simplicity. Before I studied the art, a punch to me was just like a punch, a kick just like a kick. After I learned the art, a punch was no longer a punch, a kick no longer a kick. Now that I've understood the art, a punch is just like a punch, a kick just like a kick. The height of cultivation is really nothing special. It is merely simplicity; the ability to express the utmost with the minimum. It is the halfway cultivation that leads to ornamentation."

    Take care,

    MT

    By Blogger Mike Taylor, at 3:38 PM  
