
    Recipe for Picking Winners: Add Time to a Pinch of the Past

    by Mark Hulbert

    My nearly 27 years of tracking investment newsletters’ performances have been premised on the belief that past performance can help us make more profitable choices among investment advisers. This column supplies yet more evidence in support of that notion. To be sure, as the Securities and Exchange Commission requires all investment advertising to acknowledge, past performance is no guarantee of future performance. But, by the same token, only the most extreme nihilists would insist that past performance has no relationship whatsoever to future returns.

    For this column, I devised a test to see where the truth lies between these two extremes. My test is based on a thought experiment that winds the clock back to October 2002: How would newsletters have done over the nearly five years since then if they had been chosen according to their past performance?

    I chose October 2002 because it represented a major market turning point, and junctures as big as that constitute particularly tough challenges for systems based on past performance.

    That month, of course, saw the bottom of the severe 2000–2002 bear market, during which the Dow Jones industrial average shed 38% and the NASDAQ Composite index lost a stunning 78%. Choosing an adviser in a month such as that on the basis of past performance, therefore, ran the big risk of extrapolating the past into the future at the very point that the future was about to look completely different from the past.

    The HFD Study

    In designing my study, I focused on the 140 or so newsletters that the Hulbert Financial Digest (HFD) was tracking in October 2002. I ranked all those newsletters for performance over six different periods through September 30, 2002:

    • One year,
    • Three years,
    • Five years,
    • Eight years,
    • 10 years, and
    • 15 years.

    For each time period, I ranked newsletters according to two different ways of measuring performance: Unadjusted returns, and risk-adjusted returns that measure the return per unit of risk undertaken by the portfolio. [The risk-adjusted performance measure used was the Sharpe ratio, which takes the return above a risk-free Treasury bill and divides it by the portfolio’s standard deviation, a measure of volatility.]
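
    As a rough illustration only, the Sharpe calculation described in the brackets above can be sketched in a few lines of Python; the return series and risk-free rate below are made-up numbers, not HFD data.

```python
# Illustrative sketch of the Sharpe ratio as described in the text:
# (return above a risk-free Treasury bill) divided by the standard
# deviation of the portfolio's returns. All numbers are hypothetical.

def sharpe_ratio(returns, risk_free_rate):
    """Excess return per unit of volatility (population std. deviation)."""
    n = len(returns)
    mean = sum(returns) / n
    excess = mean - risk_free_rate
    variance = sum((r - mean) ** 2 for r in returns) / n
    return excess / variance ** 0.5

# Hypothetical monthly returns for one newsletter portfolio:
monthly = [0.02, -0.01, 0.03, 0.01, -0.02, 0.04]
print(sharpe_ratio(monthly, 0.003))
```

    A higher ratio means more return was earned per unit of volatility, which is why two newsletters with the same raw return can rank very differently on a risk-adjusted basis.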

    In the event the HFD tracked more than one portfolio for a given newsletter, the newsletter’s placement in those rankings was based on the average performance of all its tracked portfolios.

    Figure 1. How Well Past Performance Predicts the Future

    My next step was to take, from each of these dozen rankings, the 10 newsletters with the best records and the 10 with the worst. I then calculated how these groups of 10 performed from the beginning of October 2002 through the end of April 2007.
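
    That selection step can be sketched as follows; the newsletter names and scores here are entirely hypothetical, while the real study used HFD-computed returns for each of the twelve rankings.

```python
# Sketch of the selection step: sort newsletters by a past-performance
# score, then take the best and worst groups. Names/scores are hypothetical.

def top_and_bottom(scores, n):
    """scores: dict mapping newsletter name -> past-performance score.
    Returns (best n names, worst n names), each in descending-score order."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:n], ranked[-n:]

scores = {
    "Letter A": 12.4, "Letter B": 9.9, "Letter C": 8.2, "Letter D": 6.6,
    "Letter E": 5.0, "Letter F": -1.2, "Letter G": -3.1, "Letter H": -7.8,
}
best, worst = top_and_bottom(scores, n=3)
print(best, worst)
```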

    In some cases, of course, the HFD was unable to calculate a continuous track record for a newsletter over this four-and-one-half-year period; excluding those newsletters would have introduced a survivorship bias into my results. To eliminate that bias, I assumed that discontinued newsletters earned, after their discontinuation, the average return of all HFD-tracked newsletters.
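
    A minimal sketch of that survivorship adjustment, assuming monthly return series and hypothetical data, might look like this:

```python
# Sketch of the survivorship-bias adjustment described above: once a
# newsletter is discontinued, its missing months are filled in with the
# average return among newsletters still being tracked. Data are hypothetical.

def fill_discontinued(records):
    """records: dict mapping name -> list of monthly returns; discontinued
    newsletters have shorter lists. Missing months get that month's average
    return among newsletters still being tracked."""
    n_months = max(len(r) for r in records.values())
    monthly_avg = []
    for m in range(n_months):
        live = [r[m] for r in records.values() if len(r) > m]
        monthly_avg.append(sum(live) / len(live))
    return {name: r + monthly_avg[len(r):] for name, r in records.items()}

# "B" is discontinued after one month; its record is padded with averages.
records = {"A": [0.02, 0.01, 0.03], "B": [0.04], "C": [0.00, 0.03]}
filled = fill_discontinued(records)
print(filled["B"])
```

    This keeps every newsletter in the sample for the full period, so the comparison between past winners and losers is not distorted by which letters happened to survive.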

    All track records in the study were those calculated by the Hulbert Financial Digest, utilizing its standard methodology for tracking investment newsletter performance. Noteworthy features of that methodology include: All transactions are executed on those days that anonymous subscribers are able to act on the newsletters’ advice; and transaction costs, such as discount brokerage commissions and bid-asked spreads, are taken into account.

    The results are summarized in Figure 1.

    The Results: Long-Term Is Better

    Two major findings emerge.

    The first finding is that past performance does a far better job of identifying good future performers when it is measured over long periods. Notice, for example, that one-year and three-year track records through 9/30/2002 did an exceedingly poor job of identifying subsequent winners and losers.

    For starters, the newsletters with the best past performances came nowhere close, on average, to the return of the Dow Jones Wilshire 5000 index. But, adding insult to injury, the newsletters with the best past returns ended up performing far more poorly than the newsletters with the worst past returns.

    When past performance was measured over periods as long as eight to 10 years, in contrast, the past was a very helpful guide to the future. Not only did the top 10 for past performance proceed to do a lot better, on average, than the bottom 10, but the top 10 also equaled or outperformed the stock market itself.

    The second major pattern that emerged from my study was that risk adjustment is most important when performance is measured over short periods of time.

    Consider, for example, the rankings based on the one-year period through September 30, 2002. When newsletters were ranked by unadjusted returns, the worst 10 did 7.1% per year better than the best 10 over the period from then through the end of April 2007.

    In contrast, when newsletters were ranked on the basis of risk-adjusted performance over this one-year period, the worst 10 did just 3.9% per year better than the best 10.

    To be sure, in both cases, focusing on one-year returns was a bad idea. But risk-adjusted one-year returns weren’t as poor a guide to the future as unadjusted one-year returns.

    The situation was the reverse when the rankings were based on 15-year returns. In that case, the unadjusted rankings did a slightly better job of separating subsequent winners and losers than did the risk-adjusted rankings.

    Why isn’t risk adjustment as important for longer periods?

    The reason, I suspect, is that time is itself a good risk-adjuster. When performance is measured over a long enough period, after all, it will have encompassed most of the likely outcomes. There is less need, in that event, to adjust for risk.

    But when performance is measured over short periods, then it becomes quite likely that the future will differ significantly from the past.

    Time Matters

    These results are quite consistent with those that have emerged from a number of other tests I have conducted.

    Past performance, while not a guarantee by any means, is an important tool in our arsenal when choosing an investment adviser. However, investors should not make decisions on the basis of performance records of less than five years, and even then it is far better to focus on track records of eight to 10 years in length, or longer.

    But if you must, nevertheless, make a choice based on performance over periods of less than five years, set your focus on risk-adjusted returns.


    Mark Hulbert is editor of the Hulbert Financial Digest, a newsletter that ranks the performance of investment advisory newsletters. It is published monthly and is located at 5051B Backlick Rd., Annandale, Va. 22003; 703/750-9060; www.hulbertdigest.com. This column appears quarterly and is copyrighted by HFD and AAII.

