Analysts vs. Newsletters: Whose Recommendations Are Best?
by Mark Hulbert

But how bad a record do analysts really have? How do they compare to investment newsletter editors, who as a general rule face no pressure from investment banking departments?

At an anecdotal level, the case for investment newsletters would appear strong. For example, consider how Wall Street analysts and investment newsletters rated Enron in the months prior to its bankruptcy:

- Of the 13 Wall Street analysts following Enron a week before the roof fell in last November, according to Forbes magazine, 11 rated it a buy, one a hold and only one a sell.
- Of the eight newsletters on the Hulbert Financial Digest (HFD) monitored list that followed Enron, in contrast, no fewer than six sold it earlier in the year at much higher prices. One was even shorting it for a few months.

The record with WorldCom is similar:

- At the beginning of April, according to I/B/E/S, a firm that tracks analyst recommendations, nine of the 17 Wall Street analysts who were then following the company rated the stock either buy or strong buy. None of these 17 rated the stock a sell, and only one thought it would underperform the market.
- In contrast, of the nine newsletters tracked by the HFD that were recommending WorldCom for purchase at the beginning of the year, six either had downgraded it or sold it outright by the end of March. Furthermore, during the first quarter of this year an additional newsletter had recommended that WorldCom be sold short.
What Does Sell Mean?
Complicating any comparison between Wall Street analysts and newsletters is the fact that they do not necessarily use the same words to mean the same things.
Consider the word "sell." Newsletter editors have no inhibitions against uttering this four-letter word and do not shy away from using it when they want to get rid of a stock.
But most analysts on Wall Street dread using the word, presumably for fear of alienating current or prospective investment banking clients. The worst they can ever bring themselves to say is that a stock is a "hold." Typically, more than half of all the ratings given by Wall Street analysts are a buy or stronger.
But this bias against the word doesn't necessarily mean that analysts' ratings are worthless. After all, most investors are able to detect analysts' true meaning when they downgrade a stock to a hold: the stock typically will plummet.
Conceivably, Wall Street's aversion to the word "sell" may be no more pernicious than the grade inflation that is ubiquitous in academia. For example, virtually no college student today is given a grade below a C, just as no grocery store rates an egg as smaller than large. But few are fooled: most know that a C is a failing grade and that large eggs are the smallest being offered.
Clearly, it is necessary to dig a little deeper than relying on anecdotes, however devastating, or dismissing analysts because of their aversion to the word "sell."

There has, in fact, been extensive academic testing of analyst performance. Perhaps the most comprehensive study of analysts' ratings was conducted by University of California (Davis) finance professor Brad Barber and three co-authors from the University of California (Berkeley) and Stanford University. Using data from First Call and Zacks Investment Research, both firms that track analyst recommendations, the professors studied more than half a million recommendations made between 1985 and 2000 by more than 4,000 brokerage firm analysts. They constructed a consensus recommendation for each stock based on the average advice of all analysts who followed it.
To get around analysts' aversion to the word "sell," the researchers focused on relative rather than absolute ratings. Even if an analyst never rated a stock a sell, he still would have rated some stocks as better bets than others. Whether those better-rated stocks actually performed better is an empirical question.
To find out whether this was indeed the case, Barber and his colleagues constructed five portfolios from their consensus recommendations. Portfolio 1 contained the approximately 20% of stocks whose consensus recommendations were the most favorable, while Portfolio 5 contained those stocks with the least favorable consensus ratings.
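The ranking-and-bucketing procedure described above can be sketched in a few lines. Everything here (the tickers, the individual ratings, and the 1-to-5 scale, where 1 = strong buy and 5 = sell) is made up for illustration, not data from the Barber study:

```python
from statistics import mean

# Hypothetical analyst ratings per stock: 1 = strong buy ... 5 = sell.
# Tickers and values are illustrative, not data from the Barber study.
ratings = {
    "AAA": [1, 2, 1],
    "BBB": [2, 2, 3],
    "CCC": [3, 3, 2],
    "DDD": [4, 3, 4],
    "EEE": [5, 4, 4],
}

# Step 1: consensus recommendation = average of all analysts covering the stock.
consensus = {ticker: mean(r) for ticker, r in ratings.items()}

# Step 2: sort from most to least favorable (a lower average = a stronger buy)
# and split the ranked list into five equal-sized portfolios (quintiles).
ranked = sorted(consensus, key=consensus.get)
n = len(ranked)
portfolios = [ranked[i * n // 5:(i + 1) * n // 5] for i in range(5)]

print(portfolios[0])  # Portfolio 1 (most favorable consensus): ['AAA']
print(portfolios[4])  # Portfolio 5 (least favorable consensus): ['EEE']
```

With only five hypothetical stocks each quintile holds a single name; with the study's full universe, Portfolio 1 would hold roughly the best-rated 20% of stocks and Portfolio 5 the worst-rated 20%.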
The results are impressive for the 14 years through the end of 1999. Portfolio 1 outperformed Portfolio 5 in each of those 14 years, and by an average of nearly 14 percentage points per year (before transaction costs) over the entire period.
However, calendar year 2000 was another story entirely. After 14 years in a row in which their top picks outperformed their pans, the analysts flunked miserably during 2000, with Portfolio 5, which contained their lowest-rated stocks, gaining 48.7% and Portfolio 1, which contained their highest-rated issues, losing 31.2%. That's a spread of nearly 80 percentage points!
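The arithmetic behind that spread is easy to verify: the gap between the lowest-rated and highest-rated portfolios in 2000 is the pans' gain plus the picks' loss.

```python
# Calendar-2000 results as cited above (percent).
portfolio_1_return = -31.2  # highest-rated stocks lost 31.2%
portfolio_5_return = 48.7   # lowest-rated stocks gained 48.7%

# Spread between the analysts' pans and their picks.
spread = portfolio_5_return - portfolio_1_return
print(round(spread, 1))  # 79.9 percentage points, i.e., "nearly 80"
```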
Over all 15 years, encompassing the 14 good years and the one outrageously horrible one, Portfolio 1 outperformed Portfolio 5 by an annualized average of 2.7%. Note, however, that this result does not take transaction costs into account.
The results for all 15 years are displayed in Figure 1, along with the performance of the average investment newsletter as reported by the Hulbert Financial Digest. To enhance comparability with the results of the analyst study, newsletters' performances are reported after first subtracting the returns of the Wilshire 5000 index. Over this period, investment newsletters lagged the Wilshire by an annualized average of 4.5%. This figure, however, does take transaction costs into account—both commissions and bid-ask spreads.
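The comparability adjustment described above is a simple excess-return calculation: subtract the index's return from the newsletter's return each year. The returns below are invented for illustration; they are not HFD or Wilshire data:

```python
# Hypothetical annual returns, in percent (illustrative only).
newsletter_returns = [12.0, -3.0, 8.0]
wilshire_returns = [15.0, 1.0, 10.0]

# Excess return each year = newsletter return minus index return.
excess = [nl - ix for nl, ix in zip(newsletter_returns, wilshire_returns)]
avg_lag = sum(excess) / len(excess)

print(excess)   # [-3.0, -4.0, -2.0]
print(avg_lag)  # -3.0: lagging the index by 3 points per year on average
```

Note that the 4.5% figure cited above is an annualized (compounded) average; the arithmetic mean in this sketch is used only to keep the example short.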
In Whom Do You Trust?
One thing that can certainly be said for newsletters over analysts is that at least newsletters use plain English: a hold tends to mean a hold, not a veiled sell recommendation. If you want to rely on analysts, it is critical that you measure their recommendations in relative terms.
But based on the data in Figure 1, I draw several conclusions.
First, it would be premature to conclude that analysts' performance between 1986 and 2000 is statistically different from that of newsletters. As noted above, the Barber study does not include transaction costs, while the HFD does.
Second, even if we ignore this differential treatment of transaction costs, it still is not clear that there is any statistical difference between the two performances, given the extraordinary volatility of the analysts' results.
Third, even though strict comparability does not exist between the two data series, broad trends nevertheless are evident. Notice, for example, that the best years for analysts—1996 through 1999—were the worst years for investment newsletters. By the same token, the worst year for analysts—2000—was one of the better years for newsletters.
These patterns lead me to a tentative hypothesis: The relative performance of newsletters and analysts is likely to be a function of the performance of large- and small-cap stocks.
Wall Street analysts tend to focus on much larger-cap stocks than the typical investment newsletter. Thus, when large-cap stocks are dominating the market, as they were in the late 1990s, analysts are likely to shine and newsletters are likely to lag. When this situation reverses itself, as it did during 2000, it is the newsletters' turn to shine.
Regardless of whether the difference between analysts' top- and bottom-rated stocks is statistically significant, it is of doubtful economic significance, since the Barber study did not take transaction costs into account. Still, the picture painted by that study is far better than that painted by the anecdotal evidence. For example, there appears to be no systematic evidence that you will beat the market by doing the opposite of the analyst consensus—Wall Street's poor current reputation notwithstanding.
At this broad level of generality, therefore, it is impossible to conclude that you should always pick newsletters over analysts, or vice versa.
Instead, recognize that these segments of the advisory industry represent broadly different approaches and perspectives.