Most economic theories assume that humans are rational, but rationality here is peculiarly defined: it assumes that we all operate on the basis of Bayes’ Theorem, an idea which is freely bandied about but which very few people can actually describe. There’s a good reason for this – it’s profoundly unintuitive, which makes you wonder whether we really do think the way the theorem suggests we ought to.

Unintuitive or not, every investor should have a working appreciation of the ideas behind Bayesian reasoning, because it’s another of those mental models we can use to assess the usefulness, or otherwise, of our ideas. And oddly enough, it may be even more fundamental than we think.

The original paper on Bayes’ Theorem was entitled An Essay towards Solving a Problem in the Doctrine of Chances and dates from 1763. It wasn’t even published by the eponymous Reverend Bayes but by his friend Richard Price, who discovered it among Bayes’ papers after his death. As we’d expect under Stigler’s Law of Eponymy, it’s quite likely that Bayes didn’t even originate Bayes’ Theorem:

“It seems to be a law of the sociology of science that no discovery or invention is named after its first discoverer”.

Anyway, this Bayes’ Theorem thing seems to be quite important, because it keeps popping up in all sorts of different places. The question is, what is it? Well, here it is in the words of the late and possibly great Reverend Bayes’ friend:

“To find out a method by which we might judge concerning the probability that an event has to happen, in given circumstances, upon supposition that we know nothing concerning it but that, under the same circumstances, it has happened a certain number of times and failed a certain other number of times. He adds, that he soon perceived that it would not be very difficult to do this, provided some rule could be found according to which we ought to estimate the chance that the probability for the happening of an event perfectly unknown, should lie between any two named degrees of probability, antecedently to any experiment made about it; and that it appeared to him that the rule must be to suppose the chance the same that it should lie between any two equidifferent degrees; which, if it were allowed, all the rest might be easily calculated in the common method of proceeding in the doctrine of chances.”

Got that? Hmm …

**Prediction**

Put simply, Bayes’ Theorem tells us how to combine prior knowledge with actual observations to predict the future. And as new evidence arrives, it tells us how to update our predictions. And, perhaps most importantly, it tells us how to do this using sparse data: it’s a rule which shows how to correctly infer the future from a tiny sample of real-world information, and it’s instinctively appealing because this conforms to our personal experience of the world. If we then go on to equate rational thought with Bayesian reasoning, we have a baseline for judging whether people are “rational” or not.
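As a minimal sketch of that updating rule (the function name and the example numbers are my own, not from any of the papers discussed), the theorem looks like this:

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(H | E) given a prior P(H) and the two likelihoods.

    Bayes' Theorem: P(H|E) = P(E|H) * P(H) / P(E),
    where P(E) = P(E|H) * P(H) + P(E|~H) * (1 - P(H)).
    """
    numerator = p_evidence_given_h * prior
    evidence = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / evidence

# Start with a 50/50 prior and observe evidence four times as likely
# under the hypothesis as under its negation:
posterior = bayes_update(0.5, 0.8, 0.2)
print(round(posterior, 3))  # 0.8
```

The key point is that the posterior becomes the prior for the next round: each new observation refines, rather than replaces, what you believed before.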

This type of rationality is clearly not the same as our commonsense appreciation of the topic, but it does provide a scientific definition against which we can apply some analysis. The idea that the brain may have evolved to implement Bayes’ Theorem may, at first flush, seem a strange one, but it isn’t really: evolution will generally find optimal solutions to common problems and if Bayesian reasoning is the best way of ensuring survival of the species then you’d expect it to produce something that operates in like manner. Yet when Amos Tversky and Daniel Kahneman investigated this problem in 1974 they determined that we don’t appear to reason in a Bayesian fashion at all.

**Rats, Base-Rates**

Underlying this Bayesian failure is a problem that generally goes by the name of the base rate fallacy: we seem to neglect the underlying rate of events happening when we compute the probability of them happening again. We looked at this problem in Cardano’s Gambit, where diagnosticians get their estimate of the probability of a woman having breast cancer, given a positive mammogram, spectacularly wrong.

The problem these experts were having is a failure to adjust their expectations for the underlying base rates: nearly 10% of women without breast cancer will get a false positive result – and there are far, far more women who don’t have breast cancer than who do. By neglecting the base rates, the clinicians estimated the chance of a woman with a positive test having breast cancer at about 75%, when the actual frequency is around 7.5%.
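To see the arithmetic behind that gap, here is the calculation with illustrative figures (the prevalence and sensitivity below are my assumptions, chosen only to be broadly consistent with the ~10% false-positive rate mentioned in the text):

```python
# Illustrative screening figures -- assumed for this sketch, not taken
# from the article itself:
prevalence = 0.01        # P(cancer): 1 in 100 women
sensitivity = 0.80       # P(positive test | cancer)
false_positive = 0.096   # P(positive test | no cancer), the ~10% in the text

# Total probability of a positive test, then Bayes' Theorem:
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
p_cancer_given_positive = sensitivity * prevalence / p_positive
print(f"{p_cancer_given_positive:.1%}")
```

With these numbers the answer comes out under 8% – in the vicinity of the article’s 7.5% figure and nowhere near the clinicians’ 75% – because the huge pool of healthy women generates far more false positives than the small pool of sick women generates true ones.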

It’s no wonder that we’re not much good at predicting events based on statistics if this is the kind of error we make.

**Wise Crowds**

Yet when Thomas Griffiths and Joshua Tenenbaum investigated human approaches to reasoning in Optimal Predictions in Everyday Cognition, they found that the results closely matched those expected from Bayesian reasoning. They argued that their examples were more closely attuned to real life than the laboratory-based experiments of their illustrious predecessors, suggesting that we are Bayesian at root.

However – and of course, there’s always a “however” – further analysis by Michael Mozer, Harold Pashler and Hadjar Homaei in Optimal Predictions in Everyday Cognition: The Wisdom of Individuals or Crowds argues that this is a facet of averaging over many people, each of whom offers only a few samples to the overall result. Basically, the wisdom of crowds leads to something that looks like the correct, i.e. Bayesian, answer but actually isn’t, suggesting that individuals may not reason in a Bayesian fashion at all. Of course, this doesn’t mean that people don’t reason as Bayesians; it merely indicates that the evidence isn’t conclusive one way or the other. This is economics, after all.
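A toy simulation (all numbers invented for illustration, not drawn from the Mozer, Pashler and Homaei paper) shows how this effect can arise: each individual’s estimate rests on just a couple of remembered examples and is very noisy, yet the crowd average lands close to the truth.

```python
import random
import statistics

random.seed(0)

TRUE_MEAN, TRUE_SD = 50, 15  # the "true" quantity people are estimating

def individual_estimate(k=2):
    # Each person consults only a couple of remembered examples.
    return statistics.mean(random.gauss(TRUE_MEAN, TRUE_SD) for _ in range(k))

crowd = [individual_estimate() for _ in range(10_000)]
print(round(statistics.mean(crowd), 1))   # crowd average: close to 50
print(round(statistics.stdev(crowd), 1))  # individual estimates: far noisier
```

The aggregate looks well calibrated even though no single individual is, which is exactly the distinction the “wisdom of individuals or crowds” critique turns on.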

**Anthropic**

As Griffiths and Tenenbaum put it:

“People integrate prior knowledge and observed data in a way that is consistent with our Bayesian model, ruling out some simple heuristics for predicting the future”.

The main issue for Bayesian reasoning, and most other prediction models, is formulating a theory about the distribution of results. If you only have a couple of samples and you don’t know whether you’re sitting in the fat tail of the distribution or in the middle, you’re basically guessing. Griffiths and Tenenbaum resolve this by relying on the Copernican anthropic principle, which argues that:

“The location of your birth in space and time in the Universe is privileged (or special) only to the extent implied by the fact that you are an intelligent observer, that your location among intelligent observers is not special but rather picked at random”

**Random and Not**

This sheer randomness, coupled with some basic knowledge about the world (e.g. 78 year old men don’t live forever just because they haven’t died yet), allows the researchers to postulate an underlying Bayesian reasoning mechanism. There’s nothing conclusive here, but it follows on from work done by Gerd Gigerenzer – How to Improve Bayesian Reasoning Without Instruction: Frequency Formats – which argues that the Tversky and Kahneman results are a facet of presenting information in an unrepresentative form, using probability formats rather than the natural frequencies we meet in everyday life.
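Gigerenzer’s frequency-format point is easiest to see by recasting a screening problem of the mammogram kind as counts of people rather than probabilities (the figures below are invented for illustration):

```python
# The same style of screening problem expressed as counts -- "natural
# frequencies" -- out of 10,000 women; all figures assumed for this sketch.
women = 10_000
with_cancer = 100                 # 1 in 100 have the disease
true_positives = 80               # 80 of them test positive
healthy = women - with_cancer
false_positives = healthy // 10   # roughly 1 in 10 healthy women test positive

share = true_positives / (true_positives + false_positives)
print(f"{true_positives} of {true_positives + false_positives} positive tests "
      f"are real cases ({share:.0%})")
```

Framed this way – 80 real cases drowned in roughly a thousand positives – the answer feels obvious in a way the probability version never does, which is Gigerenzer’s argument in a nutshell.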

If correct, this is another example of brilliant laboratory experimentation obscuring equally brilliant human reasoning capability. We may or may not be Bayesian reasoners, but we’re not irrational either. Unpicking this problem ought to be the main goal of behavioral economics, because without the answer nothing else means very much at all.

If you really want to understand Bayes’ Theorem – and you probably ought to if you want to have a proper liberal arts investing education – the best description I’ve found is here: http://yudkowsky.net/rational/bayes

Disclaimer:

As per our Terms of Use, Stockopedia is a financial news & data site, discussion forum and content aggregator. Our site should be used for educational & informational purposes only. We do not provide investment advice, recommendations or views as to whether an investment or strategy is suited to the investment needs of a specific individual. You should make your own decisions and seek independent professional advice before doing so. The author may own shares in any companies discussed, all opinions are his/her own & are general/impersonal. Remember: Shares can go down as well as up. Past performance is not a guide to future performance & investors may not get back the amount invested.


## 2 Comments on this Article

First possible objection: surely all attempts at predictive theory have to account for and overcome the implications of Chaos mathematics? Even in a fully understood system, small variations etc.

Second possible objection: Kurt Gödel’s incompleteness theorems... Bayes is no more ‘fundamental’ than any other set of mathematical ideas.

Third possible objection: the psychology of rationality implies a uniformity not found empirically in observed behaviour (Margolis and many others).

An interesting subject, because I've been boning up on Bayes' Theorem recently. My conclusion so far is that it's interesting, but it is going to be difficult to apply.

One problem is that Taleb seems to be doing his best to undermine the use of statistics in finance. His basic argument is that the presence of fat tails can lead to his famous "black swan events" which can swamp out a seemingly smooth process.

Another problem is the choice of a prior distribution. How do you choose it? You could use a uniform distribution.

My conception is like this: suppose I have a "strategy", like buying low PBV stocks. I need a prior distribution. I "could" use a uniform one, but I think that would be a bad choice. After all, evidence suggests that the strategy works well, so I wouldn't expect the prior to be uniform. Just how I might assume the priors to be distributed seems very much an open question. And besides, why would I follow a strategy if I believed it doesn't work?

The next step (the way I see it, anyway - I may be completely off base) is that I then implement the strategy. I.e. I go out and buy some low PBV stocks, and record my successes and failures. I'll have to define success, and failure, of course. But that's the easy bit. As I succeed or fail, I can update the prior to obtain a revised distribution.

I can then ask questions to see how effective my strategy is.

For fun, I decided to see how good my track record was. I concluded, using Bayesian analysis and a uniform prior distribution, that I had about a 97% chance of beating the market more than 50% of the time. I suspect that it’s at this point that people’s heads begin to swim. It’s “nice”, because you can quantify things, but I suspect that a more naive question – “does it seem to be working?” – will serve one nearly as well.
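For what it’s worth, that kind of calculation can be sketched with a uniform Beta(1, 1) prior updated on wins and losses (the function and the win/loss counts below are my own invention, not the commenter’s actual figures):

```python
import math

def beta_posterior_prob_above(successes, failures, threshold=0.5, steps=10_000):
    """P(p > threshold) under a Beta(successes + 1, failures + 1) posterior,
    i.e. a uniform Beta(1, 1) prior updated on the observed record."""
    a, b = successes + 1, failures + 1
    norm = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    pdf = lambda x: norm * x ** (a - 1) * (1 - x) ** (b - 1)
    # Trapezoidal integration of the posterior density above the threshold.
    width = (1 - threshold) / steps
    total = 0.0
    for i in range(steps):
        lo = threshold + i * width
        total += 0.5 * (pdf(lo) + pdf(lo + width)) * width
    return total

# With, say, 14 wins and 6 losses against the market (numbers invented):
print(round(beta_posterior_prob_above(14, 6), 2))
```

With no data at all the answer is 0.5, as a uniform prior demands; as wins accumulate the probability of a genuine edge climbs, which is the updating process the comment describes.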

So, long story short, the secrets of the investing universe are probably not tucked away somewhere, awaiting the light of Bayes to reveal them.

I must add the caveat that I have only just begun with Bayesian analysis, and others might find my grasp of the subject to be laughable.