
The Case for Systematic Decision-Making

by Wesley R. Gray, Ph.D.

“If you do fundamental trading one morning you feel like a genius, the next day you feel like an idiot….by 1998 I decided we would go 100% models…we slavishly follow the model. You do whatever it [the model] says no matter how smart or dumb you think it is. And that turned out to be a wonderful business.”

This quote comes from Jim Simons, founder of Renaissance Technologies, the world’s most successful hedge fund, speaking in a video recorded at MIT. It captures the utility of systematic decision-making.


The urge to use our judgment throughout the investing process is strong. I argue that, while investors need human experts to design models, they should let computers apply those models and resist the urge to inject judgment into the implementation process. “Gut-based,” or discretionary, stock pickers certainly have a compelling story: Invest countless hours in research, identify investment opportunities and profit from the hard work. Stock pickers, however, rely on the false premise that “countless hours of being busy” adds value in the context of investment management. The empirical evidence on the subject of systematic versus discretionary decision-making is abundantly clear: Models beat experts. In fact, the late Paul Meehl, one of the great minds in the field of psychology, described the body of evidence on the “models versus experts” debate as the only controversy in social science with “such a large body of qualitatively diverse studies coming out so uniformly in the same direction.”
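To make that division of labor concrete, here is a minimal sketch in Python of the arrangement being argued for: the expert encodes a rule once, and the computer executes whatever the rule says, with no discretionary override. The rule and the data are hypothetical illustrations, not any model discussed in this article.

```python
# A minimal sketch of "slavishly following the model": the expert supplies
# a signal rule; the computer executes whatever it says, with no overrides.

def model_signal(price: float, moving_avg: float) -> str:
    """Hypothetical expert-designed rule: simple trend-following."""
    return "BUY" if price > moving_avg else "SELL"

def execute(signal: str) -> None:
    # The implementation layer never second-guesses the model.
    print(f"Executing {signal} -- no matter how smart or dumb it looks today.")

# Hypothetical daily (price, moving average) observations for illustration.
for price, ma in [(101.0, 100.0), (98.5, 100.0), (103.2, 100.0)]:
    execute(model_signal(price, ma))
```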

Econs and Humans

University of Chicago professors Dick Thaler and Cass Sunstein, in their best-selling book “Nudge: Improving Decisions About Health, Wealth, and Happiness” (Yale University Press, 2008), describe two types of people in the world: econs and humans. Econs are fully rational, continuously calculating, and have unlimited attention and mental resources. Humans are a decidedly less rational and more emotionally driven bunch. This view rests on two modes of thinking that are innate to humans. As described in Daniel Kahneman’s great work “Thinking, Fast and Slow” (Farrar, Straus and Giroux, 2011), these are System 1 and System 2: System 1 decisions are instinctual and automated by the brain; System 2 processes are rational and analytical.


System 1, while imperfect, is highly efficient. For example, if Joe is faced with a large tiger charging him at full speed, System 1 will trigger Joe to turn around, sprint for the nearest tree and ask questions later. Joe’s System 2, by contrast, would calculate the speed of the tiger’s approach and assess the situation: weighing his options, Joe would realize that he has a loaded revolver that can take the tiger down in an instant.

If Joe immediately sprints for the tree, he stands a decent chance of escaping. If, on the other hand, Joe pauses to calculate his best option, shooting the tiger with his revolver, the tactical pause may end with Joe trying to remove a 500-pound meat-eating monster from his jugular vein.

Joe’s tiger situation highlights why evolution created System 1: On average, running for the tree is a life-saving decision in a high-stress situation where survival is on the line. The issue with System 1 is that its heuristic-based mechanisms often lead to systematic bias: Joe will almost always run, even when he sometimes should shoot. System 1 certainly served its purpose when humans faced life-and-death situations in the jungle, but in modern-day life, where split-second decisions rarely determine survival, the benefits of immediate decisions rarely outweigh the costs of flawed decision-making. In the context of financial markets, avoiding System 1 and relying on System 2 is of utmost importance.

Perception Is Not Reality

Ted Adelson, a vision scientist at MIT, has developed an illusion that highlights the fallibility of the human brain. This illusion is shown as Figure 1.

Stare at cells A and B in Figure 1. Do the colors of the squares look different? How confident are you that A is a different color than B? What odds would you accept in a bet? 5-to-1? 20-to-1? If you are a human, you will be confident that A and B are different. If you were an econ, however, your computer-like brain would sample a pixel in each of cells A and B, compare the red-green-blue values and find that both are 120-120-120, a perfect match. Stare a little longer, but this time cut pieces of paper to create a small box around cells A and B. Now it should be clear: A and B are the same. The lesson here, and its applicability to decision-making, is best described by Mark Twain: “It ain’t what you don’t know that gets you into trouble, it’s what you know for sure that simply ain’t so.” As investors, we need to be most wary of situations where “we know” something is bound to happen.
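You can verify the econ’s conclusion yourself. Below is a minimal sketch using the Pillow imaging library; the file name and pixel coordinates are hypothetical placeholders, since they depend on the particular copy of the checker-shadow image you use.

```python
# Compare the actual RGB values of the two "different-looking" squares.
# Requires: pip install Pillow
from PIL import Image

img = Image.open("checker_shadow.png").convert("RGB")  # hypothetical file name

# Hypothetical coordinates of a pixel inside cell A and cell B;
# adjust them to match your copy of the image.
pixel_a = img.getpixel((120, 200))
pixel_b = img.getpixel((220, 280))

print("Cell A:", pixel_a)  # e.g., (120, 120, 120)
print("Cell B:", pixel_b)  # e.g., (120, 120, 120)
print("Identical?", pixel_a == pixel_b)
```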

The Evidence Speaks: Models Beat Experts

The illusion in Figure 1 is simply meant to highlight that we can become overconfident based on first impressions. But how does a simple trick map into a broader claim that humans are irrational and thus poor discretionary decision-makers? For this endeavor, I stand on the shoulders of academic researchers who have spent their lives addressing this question.

Figure 2. Accuracy of a model versus trained experts in diagnosing brain impairment. Source: Research by Dano Leli and Susan Filskov.

Figure 3. Performance of automatic versus self-managed accounts. Source: Research by Joel Greenblatt.

In Joel Greenblatt’s study, clients could either have their accounts automatically follow his stock-selection model or manage the picks themselves. The automatic accounts earned a total return of 84.1%, besting the S&P 500 index’s 62.7% mark by more than 20 percentage points. The self-managed accounts, in which clients were given the model’s outputs but were allowed to pick and choose stocks at their discretion, earned a respectable 59.4%. That figure, however, was worse than the passive benchmark, and much worse than the performance of the accounts that simply “followed the model.” The evidence parallels the brain impairment study: models represent a ceiling on performance, not a floor.

Further Evidence That Systematic Beats Discretionary

Thus far, I’ve presented a formal study published in 1984 and a somewhat ad hoc study of investor behavior. In order to make a more convincing case that models beat experts, we require more analysis. Luckily, one doesn’t have to look that far. There is a sophisticated body of academic literature that has studied the performance of systematic and discretionary decision-making for over 50 years. The breadth and depth of studies are overwhelming, but fortunately, professors William Grove, David Zald, Boyd Lebow, Beth Snitz, and Chad Nelson have performed a meta-analysis (a study of studies) on 136 studies that analyze the accuracy of “actuarial” (i.e., computers/models) vs. “clinical” (i.e., human experts) judgment.

The studies examined by Grove et al. in their 2000 Psychological Assessment article included forecast accuracy estimates for just about every category one can imagine. A few examples include college academic performance, magazine advertising sales, success in military training, diagnosis of appendicitis, business failure, suicide attempts, and so forth. Figure 4 summarizes the compiled results of Grove et al.’s meta-analysis.

Wesley R. Gray, Ph.D., is the founder and executive managing member of Alpha Architect. He is also an assistant professor of finance at Drexel University’s LeBow College of Business.


Discussion

Paul H from CA posted 5 months ago:

Hi-

How complex are the investment models generated by experts? Are these models accessible to anyone? Do these models change with time? Is the shadow portfolio, for example, considered as one of these models?

Thank you for the article.


Ricardo Moran from FL posted 5 months ago:

Excellent article. I would also be very interested in the answer to Paul's third question: "Is the shadow portfolio, for example, considered as one of these models?" regarding both the shadow stock and the shadow mutual fund portfolios.
Thanks for the article,
R


Charles Rotblut from IL posted 5 months ago:

Paul,

The model can be created from any set of quantitative data. A basic stock screen that seeks out profitable companies with low valuations counts as a quantitative model. In the simplest of terms, a model is simply a method of identifying stocks that match or violate a set of pre-specified characteristics.

The key is to use the model as the basis for a disciplined approach to investing. Let the model determine what meets the buy and sell guidelines, which is the approach we follow with the Shadow Stock portfolio and our other portfolios. Then conduct due diligence to ensure there isn't a negative characteristic, not considered by the model, that would alter your view.
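For illustration only, here is a minimal sketch in Python of such a basic screen; the metrics, thresholds and data are hypothetical and are not the Shadow Stock portfolio's actual rules.

```python
# A minimal sketch of a basic quantitative screen: profitable companies
# at low valuations. Thresholds and data are hypothetical illustrations.
import pandas as pd

stocks = pd.DataFrame({
    "ticker": ["AAA", "BBB", "CCC", "DDD"],
    "pe":     [8.5, 25.0, 12.0, 14.0],    # price-earnings ratio
    "roe":    [0.18, 0.22, -0.03, 0.12],  # return on equity
})

# Pre-specified characteristics: cheap (P/E below 15) and profitable (positive ROE).
passing = stocks[(stocks["pe"] < 15) & (stocks["roe"] > 0)]
print(passing["ticker"].tolist())  # -> ['AAA', 'DDD']
```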

-Charles


Paul Firgens from Wisconsin posted 5 months ago:

Dr. Gray wrote an excellent book, "Quantitative Value, A Practitioner's Guide to Automating Intelligent Investment and Eliminating Behavioral Errors", which elaborates on his approach to finding a model. It will give you a sense of the complexity involved. Recommended!


Dave K from CA posted 4 months ago:

Charles,

Your advice about applying "due diligence" to the Shadow Stock screen/model seems somewhat contrary to the thrust of the article. Isn't due diligence a form of human/clinical thinking that is subject to emotions? If so, the evidence presented by Wesley Gray strongly suggests that the model altered by due diligence will most likely underperform the model itself.

If due diligence is more like applying additional fixed rules to the model, such "tinkering" is still subject to the danger identified in reason #4 above: Incorrect modifications to the model outnumber correct modifications.

I thank the experts at AAII for developing the Shadow Stock portfolio model. I'm not at all confident that I can improve upon it.


Charles Rotblut from IL posted 4 months ago:

Dave,

The only characteristics stock screens consider are those they are instructed to filter for. Nothing else about a passing company is considered. This is why it is important to look beyond a screen's results to ensure there is not a negative trait, outside the screen's parameters, that would alter your opinion.

-Charles


David Phillips from AL posted 4 months ago:

Charles,

Please address the subject of backtesting a model to see if it produced the desired results. If not, modify the model and keep testing. What are the tools to accomplish this?

However, when does this become data mining or data fitting? On the other hand, "if a model won't hold up to rigorous backtesting why should one think it will hold up going forward", to quote professor Glover.

Thanks for such an interesting article.

David


Shane Milburn from TN posted 4 months ago:

Just wanted to post that I enjoyed this article. Good information and perspective. Many of my portfolio decisions are based on a variation of Joel Greenblatt's methods, but I do find it difficult to just buy highly rated companies at random. Even realizing I might be hurting my results, I just can't stop myself from learning more about the companies.

I also second the idea that learning more about a company can cause a false confidence, and conviction on a stock can easily become tied up in ego - and it's important to keep the ego out of it if possible.


Bert Krauss from CT posted 4 months ago:

Thank you for a very interesting article.

However I have two concerns with its conclusions.

1. Assuming the model is made by humans rather than an intelligent computer that can analyze data according to its own methods, why doesn't System 1 thinking affect the making of the model?

2. More significantly, doesn't the use of the model assume that the environment it is working in is constant? If some future humans no longer have an appendix but still suffer abdominal pain for other reasons, wouldn't the model still predict appendicitis?


Steven Stark from ID posted 4 months ago:

I e-mailed Joel Greenblatt's website about a stock they had listed as a recommended buy. I thought it shouldn't be included. They basically said follow the formula, forget due diligence, stay diversified and you'll be fine.
I also have difficulty following a system but I see the merits of doing so.


Paul Campbell from UT posted 4 months ago:

Terrific article. The only issue I have with value screening models is that the process calls for buying when a stock passes the screen, and selling when it does not. Nearly half of the stocks on a value screen one quarter are off the next quarter.

That turns an investor into a short-term trader.

Comments and suggestions are welcomed.


Thomas H from VA posted 3 months ago:

The conclusions here seem correct, but the data presentation is skewed. "Models equal or beat experts" at 94% should be compared with "experts equal or beat models," which would be 54%. Or, more dramatically: models beat experts 46% of the time, while experts beat models 6% of the time. Yes, the correct titles were used, but were they catering to readers who skim too fast?
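For readers checking that arithmetic: the two "equal or beat" figures are consistent if ties account for the remaining studies. A quick sketch, taking the commenter's percentages at face value:

```python
# Quick consistency check of the quoted percentages.
models_beat = 46   # % of studies where the model strictly beats the experts
experts_beat = 6   # % of studies where the experts strictly beat the model
ties = 100 - models_beat - experts_beat  # -> 48% roughly equal

print("Models equal or beat experts:", models_beat + ties, "%")   # -> 94 %
print("Experts equal or beat models:", experts_beat + ties, "%")  # -> 54 %
```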

