Many market watchers and analysts are again looking at the well-known “First Five Days” indicator, popularized by Yale Hirsch’s Stock Trader’s Almanac. (For the record, I think the Almanac contains a wealth of useful information. I keep one on my desk and gave two as Christmas gifts to friends and family. But back to our indicator of the moment …)
The “First Five Days” indicator loosely holds that the direction of the first five trading days of January predicts the direction of the market for the remainder of the year. As proof of the indicator’s effectiveness, its proponents look back through the years to 1950 and note that of the 45 “First Five Days” periods that finished up, the stock market finished the year up 37 times – an impressive 82% win rate for the predictor (the win rate actually reached 89% just 12 years ago…).
The indicator has been cited in the last week by such venerable sources as The New York Times, U.S. News & World Report, CNN, and Money Magazine… however, there’s a problem. In its simple form, the indicator is useless, or worse, a danger to your wealth.
Well over a decade ago, I wrote in this same space that the “First Five Days” indicator is a bunch of bunk. I have been hearing a lot about it again in blogs, on CNBC, and even in the Wall Street Journal so I thought it would be useful for us to revisit the perennial myth.
There’s a well-reasoned dismantling of the “First Five Days” myth below, but here’s one big blow against the indicator’s apparent cause and effect: since their inception, worldwide stock markets have been trending up. Why? Well, not because of five up trading days at the start of the year! No, instead populations have grown, economies have expanded, and asset classes have appreciated – all of which give an upward bias to stock markets.
So, the first five days is not a useful indicator – it simply rides the long-term upward drift of markets. This is a known result, and until the nature of the worldwide expansion changes, it will remain the case.
Now, on to the full dismantling of the “First Five Days” indicator – with an added upgrade that just could make it a bit useful.
Debunking the First Five Days
It is human nature to want to understand complex systems in simple terms. That’s a problem because we tend to apply simplistic cause-and-effect models to very intricate problems – and we expect similarly “easy-to-understand” answers.
Because we like simple explanations, we are more than willing to believe cause-and-effect stories that make no logical sense. Groundhog Day exemplifies this phenomenon. Like financial markets, weather systems are complex and difficult to predict, yet humans have many simple ways of predicting the weather – including an infamous groundhog in western Pennsylvania. If Punxsutawney Phil sees his shadow on February 2nd, there will be six more weeks of winter weather. (In case you’re wondering, Phil has been correct about 39% of the time since he started predicting the weather in 1887.)
Don’t Waste Your Time
Let me be blunt. The “First Five Days” indicator is the lowest form of analysis. It is the opposite of cause and effect. This type of analysis looks for any cause to tie to an end effect, regardless of logic and, as we shall see, regardless of statistical support. The indicator is no more valid or useful than predicting the stock market based on Super Bowl winners or groundhog shadows.
Here are three reasons why…
1. The logic is arbitrary. The raw numbers for this indicator show that the market has gone down during the first five days of January 31 times in the last 70 years. In those 31 occurrences, the market finished the year up 15 times and down 16 times – essentially a coin flip. So the indicator has no predictive value when the year starts to the downside. Looking at the same data, the indicator has “been right” 82% of the time when the market starts to the upside. But that is just a restatement of the obvious: markets go up more often than they go down.
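A quick back-of-the-envelope calculation using the counts cited in this article makes the point concrete (note that the article’s two sample periods differ slightly, so treat this as illustrative arithmetic, not a precise study):

```python
# Counts as cited in the article (illustrative; the article's sample
# periods are not perfectly consistent with one another).
up_start_years = 45       # first five days finished up...
up_start_up_years = 37    # ...and the full year also finished up
down_start_years = 31     # first five days finished down...
down_start_up_years = 15  # ...yet the year still finished up

total_years = up_start_years + down_start_years
total_up_years = up_start_up_years + down_start_up_years

# Base rate: how often the market finishes the year up, signal or no signal.
base_rate = total_up_years / total_years
up_trigger_rate = up_start_up_years / up_start_years
down_trigger_rate = (down_start_years - down_start_up_years) / down_start_years

print(f"Base rate of up years:          {base_rate:.1%}")      # ~68%
print(f"'Win rate' after an up start:   {up_trigger_rate:.1%}")  # ~82%
print(f"'Win rate' after a down start:  {down_trigger_rate:.1%}")  # ~52%, a coin flip
```

The gap between the ~82% “win rate” and the ~68% base rate is far smaller than the headline number suggests, and the down-start case adds nothing at all.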
This approach is worse than useless: if the data does not fit the hypothesis, its proponents simply change the hypothesis to fit the data. That is a classic “curve fitting” mentality. Don’t risk any of your money based on that logic.
2. The triggering event itself is not statistically significant. For this indicator, all it takes to trigger a yearlong market prediction is any up move over five days. This means that trivial moves would shape your outlook for the coming year: suppose that after five days the market was up only a quarter of a point. That would still trigger the indicator’s prediction of an up year.
What’s the problem with letting a move of any magnitude trigger an indicator? A tiny move doesn’t tell us anything about what the market is doing; it’s just random background “noise”.
So how might we decide what is meaningful and what is just background noise? Many analysts use the average volatility of a price movement. Long-time readers know that I use the Average True Range (ATR) of price as a measure of volatility. (In simple terms, ATR measures the average size of the daily range – the high minus the low – while accounting for gaps between bars.) Comparing the five-day move to the ATR, we would want our trigger to move up or down by at least half of that average. Anything less would be considered random.
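A minimal sketch of that noise filter, assuming plain lists of daily highs, lows, and closes (the toy prices, the five-day window, and the half-ATR cutoff are illustrative – this is not the author’s exact calculation):

```python
def true_range(high, low, prev_close):
    """True range: the daily range, extended to cover any gap from the prior close."""
    return max(high - low, abs(high - prev_close), abs(low - prev_close))

def atr(highs, lows, closes, period=5):
    """Simple average of the true ranges over the last `period` bars."""
    trs = [true_range(h, l, pc)
           for h, l, pc in zip(highs[1:], lows[1:], closes[:-1])]
    recent = trs[-period:]
    return sum(recent) / len(recent)

def is_significant_move(move, highs, lows, closes, period=5):
    """Treat any move smaller than half the ATR as random noise."""
    return abs(move) >= 0.5 * atr(highs, lows, closes, period)

# Toy example: a five-day move of +0.25 points against a ~2-point daily range.
highs  = [101.0, 101.5, 100.8, 101.6, 100.9, 101.25]
lows   = [ 99.0,  99.5,  98.8,  99.6,  98.9,  99.25]
closes = [100.0, 100.5,  99.8, 100.6,  99.9, 100.25]

move = closes[-1] - closes[0]   # +0.25: the "trigger" in our toy example
print(f"Five-day move: {move:+.2f}, ATR: {atr(highs, lows, closes):.2f}")
print("Significant?", is_significant_move(move, highs, lows, closes))
```

With a 2-point average range, a quarter-point “signal” falls well under the half-ATR bar – exactly the kind of trigger the simple indicator happily accepts.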
With that in mind, your industrious writer dug deep into the raw data for the “First Five Days” indicator. I calculated the S&P 500 index’s ATR during the first five days for a 25-year chunk of time and checked to see how many of the “First Five Days” trigger signals could be considered more than random. The answer: Only 7!
As a useful follow-on about needing a significant move in the first days of the year, Ryan Detrick of LPL Financial provided some interesting data. He counted the number of times the S&P 500 was up more than 1.5% for the First Five Days in the last 70 years.
Detrick found just 21 times in the last 70 years when the S&P moved more than 1.5% in the first five days of the year! While the annual returns look very strong for those particular years, this is once again a small, cherry-picked data set – yet another reason not to risk money on this indicator.
3. Lastly – the sample population is just too small. When we eliminate the trigger signals that are mere noise, we have only 13 to 16 triggers of the indicator over the last 55 years. That sample size is far too small to support any statistical confidence in predictions based on it. The indicator is just simplistic curve fitting, and it has no meaning for traders and investors.
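To see how little confidence such a sample supports, here is a rough one-sided binomial check using the base rate implied by the article’s own counts. The choice of 11 “hits” out of 13 filtered triggers is hypothetical (the article does not report the outcomes of the filtered triggers); the point is that even an impressive-looking hit rate is indistinguishable from luck at this sample size:

```python
from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of at least k up years by luck."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 13        # noise-filtered triggers (low end of the article's 13-16 range)
p = 52 / 76   # base rate of up years implied by the article's counts (~68%)

# How surprising would 11 "correct" signals out of 13 be under pure chance?
p_value = binom_tail(11, n, p)
print(f"P(at least 11 of 13 up years by chance alone) = {p_value:.2f}")  # roughly 0.17
```

A p-value around 0.17 is nowhere near conventional significance thresholds, so even a strong apparent win rate over a dozen-odd triggers tells us essentially nothing.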
There are plenty of good analytical tools available to help guide your trading and investing decisions so throw out the overly simplistic and statistically meaningless ones like the “First Five Days” indicator.
One last note of caution – the indicator worked last year. Wait, it worked last year? Could that mean… it should work again this year? If you caught yourself thinking something like that, you are experiencing another psychological bias: recency bias. This tendency leads us to assign excessive weight to the most recent data points. Don’t fall into this trap – last year’s result does nothing to strengthen the case for the “First Five Days” indicator.
Feel free to discuss the indicator for cocktail party purposes but don’t waste any money trying to use it to help you make sense of the markets in the coming year.
As always, I love to hear your thoughts. Send them to drbarton “at” vantharp.com
Great trading and God bless you,