Creating the Ideal Benchmark: Freeze Your Adviser

    by Mark Hulbert

    Has your adviser beaten his performance benchmark, or hasn’t he? You’d think that the answer to this question would be straightforward. But you’d be wrong. That’s because standard performance comparisons raise a surprising number of complex issues. Should the benchmark against which he is compared include all market sectors, or only some of them? Should the weight that each security has in that benchmark be a function of its market capitalization, or should some other allocation model be used?

    These are just two of the thorny methodological questions that arise when trying to determine whether a particular adviser has actually added value to a simple buy-and-hold approach.

It is in part because of these complexities that advisers often are able to wriggle out from under what otherwise would be a devastating verdict: their failure to beat a given benchmark. The advisers can simply argue that, compared to "their own" ideal benchmark, they would have fared better.

In this article, I explore a different approach to constructing each adviser's benchmark that cuts through many of these complexities. Under this approach, the results are unambiguous.

    Unfortunately, however, the majority of investment newsletter editors fail to beat the benchmarks that I construct.

    Advisers vs. Themselves

Figure 1: Newsletters vs. Individual Benchmarks

My alternative benchmarking approach involves comparing each adviser's performance to the returns of a portfolio that contains whatever he was recommending at the beginning of the year. Unlike the adviser's actual portfolio, which will buy and sell securities at various frequencies, the benchmark portfolio will undertake no trades during the year. An adviser therefore compares favorably to his benchmark only to the extent that, through his trading, he can perform better than he would have by doing nothing other than hold his beginning-of-year portfolio.

    An example can help to make this clearer. Consider the model portfolio of a newsletter called the Wall Street Digest. According to the Hulbert Financial Digest’s calculations, this portfolio for the year-to-date through July 31 lost 16.8%. In contrast, the newsletter would have lost just 2.9% if it had simply held its portfolio that was in place at the beginning of the year and undertook no transactions.
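The frozen-portfolio benchmark described above is simple arithmetic: value the beginning-of-year holdings at January prices and again at July 31 prices, and take the buy-and-hold return. The sketch below illustrates the idea; the tickers, share counts, and prices are hypothetical, chosen only so that the benchmark return works out to the -2.9% figure cited for the Wall Street Digest.

```python
# A minimal sketch of the frozen-portfolio benchmark. All tickers,
# share counts, and prices below are hypothetical illustrations.

def frozen_benchmark_return(holdings, start_prices, end_prices):
    """Return of a portfolio that holds its Jan. 1 positions unchanged."""
    start_value = sum(shares * start_prices[sym] for sym, shares in holdings.items())
    end_value = sum(shares * end_prices[sym] for sym, shares in holdings.items())
    return end_value / start_value - 1.0

# Hypothetical beginning-of-year portfolio (shares held).
holdings = {"AAA": 100, "BBB": 50}
start_prices = {"AAA": 20.00, "BBB": 40.00}   # Jan. 1 prices
end_prices = {"AAA": 19.00, "BBB": 39.68}     # Jul. 31 prices; a security
# that stopped trading would be valued at its last traded price,
# per the article's methodology.

benchmark = frozen_benchmark_return(holdings, start_prices, end_prices)
actual = -0.168  # the adviser's actual traded return, per the article
print(f"benchmark {benchmark:.1%}, actual {actual:.1%}")
```

A positive difference between the actual return and this benchmark means the year's trading added value; here the trading cost nearly 14 percentage points.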

    I performed similar calculations for each of the other 500+ portfolios on the Hulbert Financial Digest’s monitored list. If, for any reason, a portfolio’s security stopped trading during the first seven months of the year, I calculated that portfolio’s return by valuing that security as of the last price at which it did trade.

It turns out that the experience of Wall Street Digest is more the rule than the exception. All told, 61% of the newsletter portfolios would have been better off on July 31 had they undertaken no transactions this year.

    The overall results: The average actual year-to-date return through July 31 of the 500+ portfolios the Hulbert Financial Digest monitors was 2.7%. Had all those portfolios undertaken no transactions and instead simply held in place whatever they owned at the beginning of the year, their average gain through July 31 would have been 3.0%.

Note that the Hulbert Financial Digest's calculations do not take taxes into account. However, if taxes had been considered, the newsletters' benchmark portfolios would have compared even more favorably to their actual ones.

    The reason this approach to performance comparison avoids many of the complexities otherwise associated with performance analysis is that we are not comparing each adviser to an abstract benchmark that may or may not be appropriate. Instead we are comparing him to himself, absent the trading. This makes perfect sense because the implicit claim an adviser is making when selling a security that he already holds in his portfolio, or when buying a new one, is that the portfolio will be better off if it undertakes that trade.

    Are these results specific only to this particular time period?

No. It turns out that the results reached for the first seven months of 2006 are not unique. For a number of years extending back to the mid-1980s, I have calculated similar benchmarks for newsletters' returns. In each of the half-dozen years I studied, the result was the same: On average, newsletters would have done better had they undertaken no trading.

    Return Gaps

    A recent academic study applied a similar concept to mutual funds. The study, “Unobserved Actions of Mutual Funds,” was conducted by three finance professors: Marcin Kacperczyk of the University of British Columbia and Clemens Sialm and Lu Zheng of the University of Michigan.

The professors gave the name "return gap" to the difference between a mutual fund's actual performance and what its performance would have been had it simply stuck with whatever it held as of the date of its most recent disclosure of its holdings. A positive return gap meant that the fund had beaten its benchmark, while a negative return gap meant that it had lagged.

To illustrate how good a job return gaps do of identifying advisers with market-beating abilities, the professors constructed two hypothetical portfolios. The first contained the 10% of funds that, over the trailing year, had the largest and most consistent return gaps. The second contained those with the most negative return gaps. From the beginning of 1985 through the end of 2003, the first outperformed the broad stock market by 3.8% per year (annualized), while the second performed 4.4% worse. These results suggest that return gaps are a superior way of identifying advisers with genuine ability.
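The sorting exercise described above can be sketched in a few lines: compute each fund's return gap as actual return minus frozen-holdings return, rank, and take the top and bottom deciles. The fund names and returns below are invented for illustration and are not from the study.

```python
# A sketch of ranking funds by return gap (actual minus frozen-portfolio
# return) and selecting the top and bottom deciles. All fund names and
# return figures are hypothetical.

def return_gap(actual, frozen):
    """Positive: trading added value; negative: trading subtracted value."""
    return actual - frozen

# Hypothetical (actual, frozen) trailing-year returns for ten funds.
funds = {f"Fund{i}": pair for i, pair in enumerate([
    (0.10, 0.08), (0.07, 0.09), (0.12, 0.12), (0.05, 0.02),
    (0.09, 0.11), (0.03, 0.06), (0.11, 0.07), (0.08, 0.08),
    (0.06, 0.10), (0.14, 0.09),
])}

gaps = {name: return_gap(a, f) for name, (a, f) in funds.items()}
ranked = sorted(gaps, key=gaps.get, reverse=True)
decile = max(1, len(ranked) // 10)        # top/bottom 10% of funds
top, bottom = ranked[:decile], ranked[-decile:]
print("largest gap:", top, "most negative gap:", bottom)
```

In the study, the actual sorts used trailing gaps over longer windows and many more funds; this sketch only shows the mechanics of the ranking.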


The Hulbert Financial Digest does not calculate newsletters' return gaps on an ongoing basis, and the professors' study of mutual fund return gaps ends in 2003. So there is not, as of yet, any easy way to identify which newsletters or funds consistently have the highest return gaps. But without too much legwork, you probably can determine whether the newsletter you're considering has a positive or negative return gap. Ask the newsletter's publisher for a copy of its beginning-year issue and enter the portfolio it recommended into an on-line portfolio tracking service, provided free of charge at any of a number of Web sites. Then compare the return of this portfolio you have entered with the actual returns as reported by the Hulbert Financial Digest.

    Needless to say, return gaps are not the only factor you should take into account when choosing a newsletter.

But the professors' research into mutual funds' return gaps certainly suggests that it is at least one of the factors to which you should pay close attention.

Mark Hulbert is editor of the Hulbert Financial Digest, a newsletter that ranks the performance of investment advisory newsletters. It is published monthly and is located at 5051B Backlick Rd., Annandale, Va. 22003; 703/750-9060. This column appears quarterly and is copyrighted by HFD and AAII.
