Some of the GuruModels that we feature on the site are based on the experiences of practitioners who have taken the time to articulate their approach (e.g. Jim Slater or Ben Graham). But other strategies, usually those developed by academics, are based on what is known as “back-testing”. This involves looking at historical data and simulating what would have happened if you’d used a particular technique in the past. It is usually done without factoring in trading costs or taxes, given the complexity of modelling these investor-specific aspects. One example of a Model derived using back-testing, which many readers will know we are huge fans of, is the Piotroski F-Score.

Looking at the past like this is a powerful technique. However, it's very important to be mindful of the pitfalls of investment simulations and "naive backtesting". Many strategies look great in backtests, but most disappoint upon implementation. Simulations are usually based on a 95% confidence interval, yet in reality investors are disappointed far more than 5% of the time (this has not been the case with Piotroski, though!). To understand why, it's worth being aware of the following critical issues with backtested results.
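To make the idea of a naive backtest concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the two stocks, their synthetic monthly returns, and the toy "buy last month's winner" rule are all invented for illustration, and, as the article notes is typical, trading costs and taxes are ignored.

```python
import random

random.seed(42)

# Hypothetical illustration: a naive backtest of a toy "hold last
# month's winner" rule on synthetic monthly returns for two stocks.
# All figures are made up; a real backtest would use a historical
# price database (with all the data-quality caveats discussed below).
months = 120
returns = {
    "A": [random.gauss(0.008, 0.05) for _ in range(months)],
    "B": [random.gauss(0.006, 0.05) for _ in range(months)],
}

wealth = 1.0
for t in range(1, months):
    # Rule: each month, hold whichever stock did better last month.
    pick = "A" if returns["A"][t - 1] > returns["B"][t - 1] else "B"
    wealth *= 1 + returns[pick][t]  # note: no trading costs or taxes

print(f"Final wealth per 1 unit invested: {wealth:.2f}")
```

A strategy that looks profitable on a single synthetic run like this tells you very little; the pitfalls below explain why even large-sample historical simulations can mislead.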

How good is the underlying database? 

First up, it's important to consider the calibre of the database(s) on which the back-testing is performed. As John Freeman notes in his excellent article on the subject, “Behind the Smoke and Mirrors: Gauging the Integrity of Investment Simulations”, data errors can manifest themselves in persistent and insidious ways. There are many ways to be fooled. The most important issues include:

i) Survivorship bias

This is the tendency for failed companies to be excluded from performance studies because they have gone bust or otherwise disappeared (e.g. a takeover). For example, a mutual fund company's selection of funds today will include only those that are successful now. Many losing funds are closed and merged into other funds to hide poor performance. In theory, 90% of funds could truthfully claim to have performance in the first quartile of their peers if the peer group includes funds that have closed.
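The arithmetic behind survivorship bias can be shown with a tiny hypothetical example. The ten funds and their returns below are invented: when the closed funds are dropped from the database, the peer group's average return flips from negative to comfortably positive.

```python
# Hypothetical illustration of survivorship bias: annual returns (%)
# for ten funds, five of which were later closed or merged away and
# so vanish from the database. All figures are invented.
all_funds = {
    "F1": 12.0, "F2": 8.0, "F3": 5.0, "F4": 10.0, "F5": 7.0,          # survivors
    "F6": -15.0, "F7": -22.0, "F8": -9.0, "F9": -30.0, "F10": 3.0,    # later closed/merged
}
closed = {"F6", "F7", "F8", "F9", "F10"}

survivors_avg = sum(r for f, r in all_funds.items() if f not in closed) / (
    len(all_funds) - len(closed)
)
true_avg = sum(all_funds.values()) / len(all_funds)

print(f"Average return, survivors only: {survivors_avg:.1f}%")  # 8.4%
print(f"Average return, all ten funds:  {true_avg:.1f}%")       # -3.1%
```

A backtest run on the survivors-only database would report an 8.4% average where the full peer group actually lost money on average.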

ii) Look-ahead bias

This is the bias created by the use of new or revised data not available at the time of the historical trading decision. An example of this would be where a trade is simulated based…
