Improving on the Altman Z-Score, part 1: The CHS Model
Since its debut in 1968, the Altman Z-Score has been followed religiously by analysts worldwide thanks to its ease of calculation and its relative accuracy at predicting bankruptcies (72% in its initial test). Here at Stockopedia we like to base our indicators on the original research papers that identified these tools; however, it has been known for many years that the Z-Score has its limitations.
Part of the reason for this is that it was originally developed for the analysis of operating industrial companies. It can therefore struggle with some negative working capital companies and financial companies, as discussed in our help item here. As a result we’re always on the lookout for any improvements that give the investor a better view on which companies to avoid and why (or indeed which companies to actively short and why).
In this mini-series of blog posts we’ll look at some of the main contenders to replace the Z-Score and review their use in the current market; you may even see some of them in a future screen. Contenders include the Ohlson O-Score, Merton’s ‘distance-to-default’ method and the one we’ll discuss today: the as yet unnamed Campbell-Hilscher-Szilagyi probability model (or CHS model for short).
CHS Model
The CHS model was developed by a Harvard team in 2010 with the aim of creating:
“a model of corporate failure in which accounting and market-based measures forecast the likelihood of future financial distress.”
The approach is relatively simple and intuitive: the model scours the market for firms displaying recent losses, high leverage, low and volatile recent returns, a high market-to-book ratio and a low share price. While this may sound simple enough, through the use of more up-to-date statistical models and time-based market variables the authors assert a substantial improvement over the aging Altman Z-Score and the popular ‘distance-to-default’ model (see later blogs). They even calculate a 16% improvement on early-2000s research that had already boosted the forecasting power of the Z-Score by streamlining its variables. All of this suggested to us that the CHS model is worth a closer look, so we’ll begin by examining the variables it employs.
- Weighted Profitability Measure: The ratio of a company’s net income (or losses) to the market value of its total assets. It is most likely to be negative for distressed firms, and the formula gives greater weight to the results of the most recent quarters.
- WPM = Weighted average (Quarter’s net income/MTA) where MTA = Market value of total assets = Book value of liabilities + Market Cap
- Leverage Measure: Designed to capture the indebtedness of a company, distressed businesses are thought likely to be highly leveraged.
- LM = Total liabilities/MTA
- Short-term Liquidity: If a firm runs out of cash and cannot secure financing it will fail, hence short-term liquidity should be low.
- STL = Cash & Equivalents/MTA
- Weighted Recent Relative Performance: A measure of a share’s excess returns relative to its index; companies close to bankruptcy are expected to show negative returns. Again, more weight is given to the most recent returns.
- WRRP = Weighted average (log(1+share’s return)-log(1+FTSE100 return))
- Recent volatility: A share’s standard deviation over the previous three months; distressed firms are likely to display high volatility.
- RV = Stddev(price)
- Relative size: An indication of the company’s size relative to its index. Smaller companies are thought to have less access to the temporary financing that could prevent failure.
- RS = log(Stock market cap/FTSE100 total market value)
- Overvaluation factor: Aimed at capturing overvalued firms that have recently experienced heavy losses.
- OF = MTA/Adjusted book value where Adjusted book value = Book value + 0.1*(Market Cap – Book value)
- Share price level: Distressed firms often have very low prices reflecting their equity value. The measure is capped at $15, as prices above this were not found to influence the result; a company with a $20 share price would therefore still be calculated as log(15).
- SPL = log(recent share price)
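The eight definitions above can be sketched in code. This is an illustrative sketch only: the function and parameter names are ours, not the paper’s, and the time-weighted averages (WPM, WRRP) are taken as pre-computed inputs rather than reproduced in full.

```python
import math

def mta(book_liabilities, market_cap):
    # Market value of total assets = book value of liabilities + market cap
    return book_liabilities + market_cap

def chs_inputs(book_liabilities, market_cap, net_income, cash,
               excess_log_return, price_stddev, index_market_value,
               book_value, share_price):
    """Return the eight CHS inputs as a dict (simplified, unweighted sketch)."""
    m = mta(book_liabilities, market_cap)
    adjusted_book = book_value + 0.1 * (market_cap - book_value)
    return {
        "WPM": net_income / m,                            # weighted profitability
        "LM": book_liabilities / m,                       # leverage measure
        "STL": cash / m,                                  # short-term liquidity
        "WRRP": excess_log_return,                        # recent relative performance
        "RV": price_stddev,                               # 3-month volatility
        "RS": math.log(market_cap / index_market_value),  # relative size vs index
        "OF": m / adjusted_book,                          # overvaluation factor
        "SPL": math.log(min(share_price, 15.0)),          # share price, capped at $15
    }
```

Note how the $15 cap is applied before taking the log, so any price above $15 contributes the same log(15) term.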
These eight variables are well designed to cover all the tell-tale signs of a company in trouble and potentially headed toward bankruptcy. Each one, however, has a slightly different bearing on the model, so each is scaled by a coefficient before being inserted into the final probability model. The coefficients were found using a method called logistic regression; we won’t go into too much detail here, but suffice it to say that they were computed using data on thousands of companies between 1963 and 2008. The formula for the probability of a firm failing in the next 12 months, for instance, can be seen below to give you an idea of the result:
- Logit = -8.87 – 0.09*SPL + 0.07*OF – 0.005*RS + 1.55*RV – 7.88*WRRP – 2.27*STL + 1.6*LM – 20.12*WPM
- Probability of Failure = P = 1/(1+exp(-Logit))
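The final step is a one-liner once the eight inputs are in hand. A minimal sketch, using the coefficients quoted above and the standard logistic sign convention P = 1/(1 + e^(-Logit)), under which a higher logit maps to a higher probability of failure (distress pushes the logit up: losses make WPM negative, so the -20.12 coefficient adds a large positive term, and likewise for high LM and RV):

```python
import math

def chs_probability(SPL, OF, RS, RV, WRRP, STL, LM, WPM):
    """12-month probability of failure from the eight scaled CHS inputs."""
    logit = (-8.87 - 0.09 * SPL + 0.07 * OF - 0.005 * RS + 1.55 * RV
             - 7.88 * WRRP - 2.27 * STL + 1.6 * LM - 20.12 * WPM)
    # Logistic transform: maps the logit onto a probability in (0, 1)
    return 1.0 / (1.0 + math.exp(-logit))
```

For a healthy firm the large negative intercept (-8.87) dominates, so the logit stays deeply negative and the probability lands near zero; only when several distress signals fire together does P climb toward 1.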
The statistical approach behind the probability calculation is a hazard-style logit model of the kind used in ‘survival analysis’, which is specifically tailored to modelling failures. It is better suited to the task than the statistical tools available to Altman in 1968 and improves the model’s forecasting ability. The result of the calculation is a number between 0 and 1, where companies scoring 0-0.05 are considered safe and those scoring 0.9-1 are considered at risk of failure.
So does it work?
Well, the authors ran the model over a set of companies divided into different probability percentiles for a year; below is a table of the results for the top and bottom percentiles. Keep in mind that the excess return is how much you could have made (or lost) relative to putting your money in the parent index:
| Portfolio | Excess return | 4-factor alpha |
|---|---|---|
| Low risk companies (0 < P < 0.05) | 3.7% | 1.0% |
| High risk companies (0.95 < P < 0.99) | -9.4% | -7.5% |
| V. high risk companies (0.99 < P < 1) | -25.7% | -24.2% |
It’s clear that the low risk companies provided a reasonable positive return, although the alpha is probably too low to draw strong conclusions from (for those unfamiliar with the 4-factor alpha, think of it as how much more or less a portfolio has returned relative to what it should have returned for its inherent risk). The high risk companies, on the other hand, display an extremely strong correlation between probability of failure and falling share prices, with highly negative alphas. This indicates a good ability for the model to predict declining shares, particularly in the 0.99-1 range.
Before you run off to set up your own short-selling portfolio off the back of this research, however, it is worth noting a couple of important points. Firstly, the model performs best when applied to distressed firms in the extreme value and growth categories, which may limit the number of companies that qualify for a screen. Secondly, its application to financial companies is uncertain, as the relatively opaque balance sheets in this sector render some of the variables potentially unreliable. If you’re convinced that there’s another slump waiting around the corner, however, then this indicator could be an important tool for your portfolio.
From the Source
You can read the original paper here.
Disclaimer:
As per our Terms of Use, Stockopedia is a financial news & data site, discussion forum and content aggregator. Our site should be used for educational & informational purposes only. We do not provide investment advice, recommendations or views as to whether an investment or strategy is suited to the investment needs of a specific individual. You should make your own decisions and seek independent professional advice before doing so. The author may own shares in any companies discussed, all opinions are his/her own & are general/impersonal. Remember: Shares can go down as well as up. Past performance is not a guide to future performance & investors may not get back the amount invested.