The Case for Systematic Decision-Making
“If you do fundamental trading one morning you feel like a genius, the next day you feel like an idiot … by 1998 I decided we would go 100% models … we slavishly follow the model. You do whatever it [the model] says no matter how smart or dumb you think it is. And that turned out to be a wonderful business.”
This quote comes from a talk Jim Simons, founder of Renaissance Technologies, arguably the world’s most successful hedge fund, gave at MIT; it neatly captures the utility of systematic decision-making.
The urge to use our judgment throughout the investing process is strong. I argue that, while investors need human experts to design models, they should let computers be in charge of applying those models and fight the urge to use their judgment in the implementation process. “Gut-based,” or discretionary, stock pickers certainly have a compelling story: Invest countless hours in research, identify investment opportunities and profit from the hard work. Stock pickers, however, rely on the false premise that “countless hours of being busy” adds value in the context of investment management. The empirical evidence on systematic versus discretionary decision-making is abundantly clear: Models beat experts. In fact, the late Paul Meehl, one of the great minds in the field of psychology, described the body of evidence on the “models versus experts” debate as the only controversy in social science with “such a large body of qualitatively diverse studies coming out so uniformly in the same direction.”
Econs and Humans
University of Chicago professors Richard Thaler and Cass Sunstein, in their bestselling book “Nudge: Improving Decisions About Health, Wealth, and Happiness” (Yale University Press, 2008), describe two types of people found in the world: econs and humans. Econs are fully rational, continuously calculating and have both unlimited attention and mental resources. Humans are a decidedly less rational and more emotionally driven bunch. This view rests on two ways of thinking that are innate to humans. As described in Daniel Kahneman’s great work “Thinking, Fast and Slow” (Farrar, Straus and Giroux, 2011), humans are driven by two modes of thinking: System 1 and System 2. System 1 decisions are instinctual and automated by the brain; System 2 processes are rational and analytical.
System 1, while imperfect, is highly efficient. For example, if Joe is facing the threat of a large tiger charging him at full speed, System 1 will trigger Joe to turn around and sprint for the nearest tree, and ask questions later. As an alternative, Joe’s System 2 will calculate the speed of the tiger’s approach and assess his situation. Joe will examine his options and realize that he has a loaded revolver that can take the tiger down in an instant.
If Joe immediately sprints for the tree, he may get lucky and outrun the tiger. If, on the other hand, Joe pauses to calculate his best option, which is shooting the tiger with his revolver, that tactical pause may end with Joe trying to remove a 500-pound meat-eating monster from his jugular vein.
Joe’s tiger situation highlights why evolution created System 1: On average, running for the tree is a life-saving decision in a high-stress situation where survival is on the line. The problem with System 1 is that its heuristic-based shortcuts often lead to systematic bias: Joe will almost always run, even when he should sometimes shoot. System 1 certainly served its purpose when humans faced life-and-death situations in the jungle, but in modern-day life, where split-second decisions are rarely a matter of survival, the benefits of immediate decisions rarely outweigh the costs of flawed decision-making. Avoiding System 1 and relying on System 2 is therefore of utmost importance in the context of financial markets.
Perception Is Not Reality
Ted Adelson, a vision scientist at MIT, has developed an illusion that highlights the fallibility of the human brain. This illusion is shown as Figure 1.
Stare at cells A and B in Figure 1. Do the colors of the squares look different? How confident are you that A is a different color than B? What odds would you accept in a bet? 5-to-1? 20-to-1? If you are a human, you will be confident that A and B are different. If you are an econ, however, your computer-like brain will sample a pixel in each of cells A and B, compare their red-green-blue values and find that both are 120-120-120, a perfect match. Stare a little longer, but this time cut pieces of paper to create a small box around cells A and B. Now it should be clear: A and B are the same color. The lesson here, and its applicability to decision-making, is best captured by a line attributed to Mark Twain: “It ain’t what you don’t know that gets you into trouble, it’s what you know for sure that simply ain’t so.” As investors, we need to be most wary of situations where “we know” something is bound to happen.
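For readers who want to check the squares the way an econ would, a few lines of Python using the Pillow imaging library can sample the two cells directly. The file name and pixel coordinates below are placeholders; you would point them at your own copy of the figure and at spots inside cells A and B.

from PIL import Image

# Load a local copy of the checkerboard illusion (placeholder file name).
img = Image.open("checker_illusion.png").convert("RGB")

# Placeholder coordinates for one pixel inside cell A and one inside cell B;
# adjust them to match wherever the cells fall in your copy of the image.
pixel_a = img.getpixel((130, 125))
pixel_b = img.getpixel((195, 215))

print("Cell A RGB:", pixel_a)
print("Cell B RGB:", pixel_b)
print("Identical?", pixel_a == pixel_b)  # the article reports both as 120-120-120

The measurement and the perception disagree, and the measurement is the one that is right.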
The Evidence Speaks: Models Beat Experts
The illusion in Figure 1 is simply meant to highlight that we can become overconfident based on first impressions. But how does a simple optical trick translate into the broader claim that humans are irrational and thus poor discretionary decision-makers? To answer that, I stand on the shoulders of academic researchers who have spent their careers addressing this question.
Consider a study by Leli and Filskov (1984) on the diagnosis of intellectual deterioration associated with brain damage. Working on their own, experienced clinicians identified the condition correctly only 58.3% of the time and inexperienced clinicians only 62.5% of the time, while a simple quantitative model did substantially better. The researchers then took their analysis one step further. They wanted to explore what would happen when the experts were armed with a powerful prediction model. A natural hypothesis is that experts combined with models can outperform the stand-alone model. In other words, models represent a floor on performance, to which experts can add incremental value, not a ceiling. In follow-on tests, the researchers gave the clinicians the output of the model and disclosed that the model had “previously demonstrated high predictive validity in identifying the presence or absence of intellectual deterioration associated with brain damage.” Experienced clinicians significantly improved their accuracy rate, from 58.3% to 75%, and the inexperienced clinicians moved from 62.5% to 66.5%. Nonetheless, the experts were still unable to outperform the stand-alone model, which had an 83.3% accuracy rate.
This study suggests that models represent a ceiling on performance, not a floor. Why? Models are built by humans when they are in the rational System 2 mode of thinking. The models are then implemented in a systematic way, devoid of System 1 bias. In contrast, human experts develop an internal model and then implement their thesis in a discretionary way. Unfortunately, discretionary decision-makers cannot shield their choices from System 1 biases, which undermine their ability to beat a systematic process.
But Discretionary Investors Beat Simple Models, Right?
One might argue that the clinicians in the Leli and Filskov (1984) study were subpar and perhaps the study design was flawed. Expert stock pickers have access to much better quantitative tools and can develop soft or qualitative information edges. Stock pickers can’t possibly be beaten by simple models, can they?
Joel Greenblatt, famous for his bestselling books “You Can Be a Stock Market Genius Even If You’re Not Too Smart” (Simon and Schuster, 1997) and “The Little Book That Beats the Market” (John Wiley & Sons, 2006), stumbled upon a natural experiment, which he discussed in an interview with Morningstar. Greenblatt’s firm, Formula Investing, uses a simple algorithm that buys firms ranking highly on an average of their cheapness and their quality. A quantitative Warren Buffett, if you will. The firm offers investors separately managed accounts, and those investors have a choice: They can simply follow the model and purchase all the names it suggests, or they can receive the list of the model’s outputs but use their own discretion in making individual stock picks. Greenblatt collected data on all of the firm’s separately managed accounts from May 2009 through April 2011 and tabulated the results. I’ve presented the results in Figure 3.
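To make the mechanics of such a screen concrete, here is a minimal sketch in Python (using pandas) of a “rank on cheapness, rank on quality, average the ranks” screen in the spirit Greenblatt describes. The earnings-yield and return-on-capital columns are my own assumed proxies for cheapness and quality, and the tickers and figures are invented for illustration; this is not Formula Investing’s actual model.

import pandas as pd

# Illustrative universe: the metrics and values below are made up.
stocks = pd.DataFrame({
    "ticker": ["AAA", "BBB", "CCC", "DDD", "EEE"],
    "earnings_yield": [0.12, 0.08, 0.15, 0.05, 0.10],      # assumed proxy for cheapness
    "return_on_capital": [0.30, 0.45, 0.10, 0.50, 0.25],   # assumed proxy for quality
})

# Rank each dimension separately (1 = best), then average the two ranks.
stocks["cheap_rank"] = stocks["earnings_yield"].rank(ascending=False)
stocks["quality_rank"] = stocks["return_on_capital"].rank(ascending=False)
stocks["combined_rank"] = (stocks["cheap_rank"] + stocks["quality_rank"]) / 2

# The systematic account simply buys the top-ranked names; no discretion is applied.
portfolio = stocks.sort_values("combined_rank").head(3)
print(portfolio[["ticker", "combined_rank"]])

The point is not the particular metrics but the discipline: the systematic account owns whatever the ranking produces, while the discretionary account lets the investor override it, which is exactly the choice Greenblatt’s clients faced.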