Testing of Efficient Markets
Van K. Tharp, Ph.D.
In Study II, we did historical testing of an efficiency signal on
today’s S&P 500 data going back to 1980.
Basically, we bought highly efficient stocks on the first of
the month and held them with a 25% trailing stop.
Once the portfolio had 25 stocks (i.e., at 1% risk with a 25% stop, each
position was about 4% of equity, so 25 positions meant we were fully
invested), we only bought more stocks when we were stopped out
of a loser. The net
result was a compounded annual ROI of 38%.
The system took 674 trades and rejected 701 signals (i.e., because we were
already fully invested). 56.7% of
our trades made money, and the average win was 3.87 times bigger than
the average loss. We
also spent an average of 378 days in a winning trade versus 80 days in a
losing trade. And we spent nine
million four hundred dollars in trading costs, which amounted to 1%
going in and 1% going out, so you can’t say that low costs
influenced our results. The
Sharpe ratio of this system is nearly two, and the System Quality
Number™ is close to the Holy Grail range.
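To make that sizing rule concrete, here is a rough sketch of how the share count for each position could be computed. This is my own illustration of the 1% risk / 25% trailing stop logic described above, not the actual code from the study; the function name and example numbers are just for illustration.

```python
def shares_to_buy(equity, entry_price, risk_pct=0.01, trailing_stop_pct=0.25):
    """Size a position so that hitting the trailing stop loses risk_pct of equity.

    Risk per share is the distance to the stop (25% of the entry price),
    so each position ends up being about 4% of equity (1% / 25%).
    """
    risk_dollars = equity * risk_pct                  # e.g., $1,000 on a $100,000 account
    risk_per_share = entry_price * trailing_stop_pct  # loss per share if stopped out
    return int(risk_dollars / risk_per_share)

# A $40 stock on $100,000 equity -> 100 shares, i.e., a $4,000 position.
print(shares_to_buy(100_000, 40.0))
# The same stock split-adjusted to $1 -> 4,000 shares (still a $4,000 position),
# which is why early, split-adjusted positions look so large in share terms.
print(shares_to_buy(100_000, 1.0))
```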
Also, some of our early position sizes were ridiculous, because
a split-adjusted stock like Microsoft might have been priced at 10 cents at
the start of the data. However, remember that this was today’s
S&P 500 going back to 1980.
Tell me what the S&P 500 will be 25 years from now, and I
can probably just buy and hold the stocks on that future list that are
NOT ON THE LIST today and produce tremendous results. Also,
we were not buying stocks that were subsequently dropped from the
list. So how much of our
results is due to survivorship bias?
Potentially, a lot of the gain could have been due to that.
Furthermore, we have not yet determined whether we could get
similar results with any trend-following entry, including a simple
180-day channel breakout (i.e., the stock makes a new 180-day high).
Remember that this research is based upon my feeling of confidence that if
you buy stocks that show efficient uptrends (i.e., they are fairly
straight lines going up) with a 25% trailing stop and risking 1%,
you will make nice profits under most market conditions.
I showed that in a 2001 issue of Market
Mastery and in the recent evaluation of our portfolio.
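For reference, the 180-day channel breakout entry mentioned above (the stock closes at a new 180-day high) could be checked with something like the sketch below. This is only an illustration under my reading of that definition; the function and data format are my own assumptions, not the study’s code.

```python
def is_channel_breakout(closes, lookback=180):
    """Return True if the latest close exceeds the highest close
    of the prior `lookback` trading days (a new 180-day high)."""
    if len(closes) <= lookback:
        return False  # not enough history to evaluate the channel
    prior_high = max(closes[-(lookback + 1):-1])
    return closes[-1] > prior_high

# Example: a steadily rising price series makes a new 180-day high every day.
prices = [100 + 0.1 * day for day in range(200)]
print(is_channel_breakout(prices))  # True
```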
Except for two potential data errors, 1) the survivorship bias
or, worse yet, 2) the possibility of buying great stocks before they
were recognized as being the industry leaders, the last study might
have proved my point. After
all, we made a compounded ROI of 38% while the S&P 500 during
that time only made a compounded return of 10-12%.
Here are the questions I’m trying to answer:

1) Is it possible to automate this form of trading?
How do you take something that’s fairly discretionary and
turn it into something that’s objective?
Right now, it’s my subjective judgment that determines
whether or not something is a good efficient stock.
And this question, by the way, is probably the most difficult
question for most trading systems.

2) If the method can be automated, what is the formula
that will totally define an efficient stock for us?
At this point we seem to have answered the first two
questions, but have we really?

3) How can we overcome some of the data problems that are
present in historical stock data?

4) What market conditions are favorable for this method
and what market conditions should be avoided?
Most of our testing included the entire secular bull market
from 1982 through 2000, and we didn’t lose money in any year until
2002 and 2003 (which perhaps suggests that getting a list of future
big stocks did influence the data).

5) What can we expect from this method long term?
I don’t expect to answer all of those
questions in these studies. But
if you begin to understand some of the problems involved in
backtesting, then I’ve
met my objective for writing these articles.
And I think I’ve already illustrated many of them.
We’ve shown great results through backtesting with a method
I’m already confident in. But
the problems still remain. And
there is always the possibility that some coding errors were
involved in some of our great results.
When I published the first study a couple of
weeks ago, I asked my readers for comments on what might be wrong
with it from your perspective. Thank
you for your answers, which included 1)
the survivorship bias, 2) the inflationary impact of the last 30 years,
and 3) the impact of trade size.
Trade size was considered major because, as our equity got
huge, we were taking on huge positions that could have moved the
market. Initially, our sizes were pretty big because of
the split-adjusted S&P 500.
For example, if you buy a stock that was introduced at $40 per share but,
because of splits, is adjusted in the data to $1 per
share at its entry, then you are going to be buying a huge position at
the onset. However,
those positions are only huge because of the split/dividend
adjustment. Second, we
did gravitate toward huge positions as our equity grew, but many of
the S&P 500 stocks can take huge positions without moving the
market that much. In
addition, to compensate for some of that we took a 1% hit both going
in and going out. That’s
probably quite high for early positions and low for big positions at
the end of the study.
Probably the biggest adjustment that we’d
have to make to the data is the impact of withdrawals for taxes,
etc. However, you would have that problem with any data set.
What most of you probably don’t realize is that I have made no position
sizing adjustment to really push the results.
With the kind of System Quality Numbers we’ve been getting
in this research, it would be pretty easy to make position sizing
adjustments that would give us triple digit returns.
But we haven’t done that.
Instead, we have used a simple 1% risk.
I could increase my starting equity to $1 million and then
risk 0.1% per trade. This would
allow me to take $4,000 positions in 250 different stocks (so we
could be in as many as half of the S&P 500 stocks at any one
time). However, before I
make position sizing adjustments, I’m looking to find the
algorithms that I like.
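The arithmetic behind those numbers can be sketched as follows (again just an illustration of the relationships stated above; the function name is mine):

```python
def portfolio_limits(equity, risk_pct, trailing_stop_pct=0.25):
    """Dollar size of each position and how many positions fit
    before the account is fully invested."""
    risk_dollars = equity * risk_pct
    position_dollars = risk_dollars / trailing_stop_pct
    max_positions = int(equity / position_dollars)  # equals trailing_stop_pct / risk_pct
    return position_dollars, max_positions

print(portfolio_limits(100_000, 0.01))     # (4000.0, 25)  -> the original setup
print(portfolio_limits(1_000_000, 0.001))  # (4000.0, 250) -> the 0.1% risk variant
```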
All of this research, by the way, is being done in
Professional software with the assistance of its
developer, Bob Spear. Thank
you, Bob. And while
I’m using the Professional version,
everything we’re doing can be done with the standard version.
Overcoming Bugs in Our Coding
Between the time I published the first study
and this writing, we have found several errors in our coding.
So remember, you can get great results despite coding errors
and sometimes because of them. The
first coding error was in the way that a variable called TOTALCASH
(that was used to determine position sizing) was calculated.
That error was fixed. It
affected position sizing and reduced the total compounded return by
about 5% -- the rest of the variables remained the same.
But that illustrates how much a variable that affects
position sizing can affect the results.
Table 1 shows the major changes in the results
with correction in the TOTALCASH variable.
Table 1: Impact of the Total Cash Error
The second error was a bug in how we did the
ranking. However, that
error actually improved the results slightly and that problem is
fixed in all subsequent studies.
The third coding problem was in the way the smoothing
function was calculated. When
you calculate the day-to-day difference in price (close minus close) and find
the standard deviation of those differences (which is what we did for the
smoothing function), it’s going to give a very low standard deviation for
low-priced stocks (compared with high-priced stocks) and favor them in the ranking.
However, at the same time, the efficiency algorithm actually
moves away from low-priced stocks (i.e., a stock that moves from
$0.25 to $0.75 is very unlikely to get an efficiency rating of 8).
Thus, while very few low-priced stocks will have an
efficiency rating above 8, when we have “close minus close”
involved in our smoothing ranking, we’ll tend to favor those stocks when
they do achieve an efficiency rating above 8.
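To make the scale dependence concrete, here is a rough sketch (my own illustration, not the study’s code) comparing the two smoothing measures on two stocks that make identical percentage moves at very different price levels:

```python
import numpy as np

def smoothing_abs(closes):
    """Standard deviation of day-to-day price differences (close minus close)."""
    return np.std(np.diff(closes))

def smoothing_rel(closes):
    """Standard deviation of day-to-day price ratios (close divided by close)."""
    return np.std(closes[1:] / closes[:-1])

rng = np.random.default_rng(0)
daily_returns = rng.normal(0.005, 0.01, 250)        # the same percentage moves
low_priced = 0.50 * np.cumprod(1 + daily_returns)   # a 50-cent stock
high_priced = 50.0 * np.cumprod(1 + daily_returns)  # a $50 stock

# The absolute measure is 100x smaller for the 50-cent stock, so it looks
# far "smoother"; the relative measure scores both stocks the same.
print(smoothing_abs(low_priced), smoothing_abs(high_priced))
print(smoothing_rel(low_priced), smoothing_rel(high_priced))
```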
Table 3 shows a sample of the first 25 stocks picked.
The average position size was 8,000 shares, which suggests
that at $1,000 risk (i.e., 1% of the initial $100,000) the average
risk was $0.125 per share and the average price of those shares was 4 times
that, or 50 cents (since the risk per share is the 25% stop times the price). Table
4 shows a sample of the first 25 stocks picked with “close divided
by close” used. The
average price of those shares, based upon the same calculations, was
about $2.82. As a
result, we started using “close divided by close” in our smoothing function.
Unfortunately, this change produced a very
unusual result. It kept
us in both winning and losing trades much longer.
And as a result, we had fewer trades and much poorer results.
The compounded return on investment dropped significantly, to 14.58%.
Table 2 shows the impact of using “close
divided by close” in our smoothing function.
Notice that 1) win percentage doesn’t change much and 2)
the win/loss ratio actually goes up.
Thus, the only reason for the major decrease in
performance is the dramatic increase in the number of days spent in both
winning and losing trades.
I have no clue why we stayed in both
winning and losing trades so much longer and this aspect of the
study is not over (in my opinion) until I understand the reason.
It is particularly surprising to me that we stayed in
losing trades for almost a year (versus 80 days).
Why? We still
have the same 25% trailing stop.
However, this is typical of the kinds of issues that you get
into with backtesting.
Table 2: Impact of the Change in the Smoothing Adjustment
This data also shows why backtesting is not THE
answer – at least, not by itself.
I could have stopped testing when I found a compounded ROI of
38.65% with a method that I was already confident in.
Instead, this sort of testing is just a means of helping
you determine how and why your method works (or doesn’t work).
But why does the change in the smoothing
function dramatically alter the average days in both the winning and
losing trades? One
possibility could be that the trades generated by the two systems
were totally different as shown in Tables 3 and 4.
Table 3: Close Minus Close Smoothing
Note that the first 25 trades were
all taken in a major BEAR market, but some of them still
turned into huge winners with the 25% trailing stop.
In Table 3, two stocks were held until the 1987 crash; in
Table 4, three stocks were held that long.
Table 4: Close Divided by Close Smoothing
I looked at the standard deviation of the days
in trades for both samples. For
the "close minus close" sample, the mean was 274.74 days and the
standard deviation was 311.72 days.
For the "close divided by close", the mean was 491.71 days and
the standard deviation was 492.7 days.
If you did a t-test comparing these two samples, you could
not reject the hypothesis that they were from the sample population.
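For anyone who wants to check that statement, the comparison can be run directly from the summary statistics above. This is a sketch using an unequal-variance (Welch) t-test; the sample size of 25 trades per group is my assumption, based on the 25-stock samples in Tables 3 and 4.

```python
from scipy import stats

# Days-in-trade summary statistics quoted above; n = 25 per sample is assumed.
result = stats.ttest_ind_from_stats(
    mean1=274.74, std1=311.72, nobs1=25,   # "close minus close" sample
    mean2=491.71, std2=492.70, nobs2=25,   # "close divided by close" sample
    equal_var=False)                       # Welch's t-test
# Per the discussion above, the difference is not statistically significant.
print(result.statistic, result.pvalue)
```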
For the next few articles, I plan to look at the following:
1. What happens when we simply buy and hold a position
in each stock for the entire 25 years?
One of you actually did this study, but it did not correspond
to the same years, so it is difficult for us to make a meaningful comparison.
2. We’ll also determine what happens when we allow
ourselves to take as many as 250 trades (i.e., half the S&P 500
database) at any one time with the two smoothing functions.
With 1% risk and a 25% trailing stop we are limited to 25
trades. With a 0.1% risk
and a 25% trailing stop, we are limited to 250 trades.
We’ll simply increase our starting equity to $1M so that
we’ll be investing the same amount ($4000) with each trade.
3. Is there a better algorithm to find what I’m looking
for? I’m not
convinced, given these results, that I’m really buying the stocks
I’d normally buy when looking at a chart.
One way would be to look at charts of the 100 trades from
both smoothing algorithms to determine how many of them look like
efficient stocks. This
will give us a good idea of whether or not we are looking at
efficient stocks. If any
of you would like to do that and save me some time, I’d appreciate
it. Please let us know
and we’ll send you the data. And
if there are a number of you, we’ll simply split them up.
4. We’ll also try both the 180-day channel breakout and
linear regression to pick our trades (a rough sketch of one
regression-based measure follows this list).
5. And, lastly, when I feel I have some of the answers
I’m looking for, we’ll move to the real S&P 500 database that we have.
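Since item 4 mentions linear regression, here is one possible way to score how much a chart looks like a “fairly straight line going up”: regress the log closes on time and combine the slope with the R-squared of the fit. This is only my own sketch of such a measure, not necessarily the algorithm the studies will settle on.

```python
import numpy as np

def straightness_score(closes):
    """Fit log(price) against time and return (slope per day, R-squared).

    A positive slope with R-squared near 1 means a steady, nearly
    straight-line uptrend; choppy or flat stocks score much lower."""
    y = np.log(np.asarray(closes, dtype=float))
    x = np.arange(len(y))
    slope, intercept = np.polyfit(x, y, 1)
    fitted = slope * x + intercept
    ss_res = np.sum((y - fitted) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return slope, 1.0 - ss_res / ss_tot

# A smooth 0.2%-per-day uptrend scores an R-squared of essentially 1.
print(straightness_score([100 * 1.002 ** day for day in range(180)]))
```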
Notice that at this point I still have not done
the following: 1) looked
at the effect of any trend-following algorithm and compared it with
efficiency; 2) looked at the data on an S&P 500 database that
added and subtracted stocks as the index did; or 3) made position
sizing adjustments to see what’s really possible with this sort of
trading. All of that is
still to come in subsequent articles and it looks like this series
might continue for some time.
About Van Tharp: Trading
coach and author Dr. Van K. Tharp is widely recognized for his
best-selling book Trade Your Way to Financial Freedom and
his outstanding Peak Performance Home Study program - a highly
regarded classic that is suitable for all levels of traders and
investors. You can learn more about Van Tharp at www.iitm.com.