Systematic errors

Systematic errors of judgment are not things committed only by other people. We are all prone to them – including me
January 24, 2019

Thanks to work inspired by the Nobel laureate Daniel Kahneman, we all know now that people are prone to countless cognitive biases – systematic errors of judgment that distort our thinking and often cost us money.

There is, however, something irritating about a lot of discussion of such biases. There’s a tendency, which I have sometimes shared, to speak about them in the same way that sanctimonious religious believers speak about sins – as if they were something only committed by other people. Perhaps the most egregious example of this was David Cameron’s establishment of the Behavioural Insights Team in 2010. He used this to uncover ways to nudge people into making better decisions. He might have been better advised to have asked it to police his own cognitive biases – not least of which was the overconfidence (which Kahneman himself has called the most damaging bias) which led him to think he could win a referendum on Brexit.

As an attempt to counter this bad habit, I thought I’d consider my own biases – ones that have cost me money in the past, and might still be doing so.

One has been a misperception of correlation. If we look at short-term returns – monthly or even annual – we see high correlations across international stock markets. I inferred from this that international diversification was a poor way of spreading risk. What I neglected was that high short-term correlations are consistent with large longer-term differences in returns. Since 2006, for example, the All-Share index has underperformed MSCI's world index by 75 percentage points in sterling terms.
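A quick simulation can make the point concrete. The numbers below are purely illustrative (not real index data): two hypothetical markets share a common monthly factor, so their month-to-month returns are highly correlated, yet a small difference in mean returns compounds into a wide cumulative gap.

```python
import numpy as np

rng = np.random.default_rng(0)

months = 13 * 12  # roughly the span since 2006 mentioned above

# Shared monthly market factor plus a little market-specific noise.
common = rng.normal(0.005, 0.04, months)
noise_a = rng.normal(0.0, 0.01, months)
noise_b = rng.normal(0.0, 0.01, months)

# Hypothetical markets: "uk" has a slightly lower mean monthly return.
uk = common - 0.003 + noise_a
world = common + noise_b

# Month-to-month correlation is high...
corr = np.corrcoef(uk, world)[0, 1]

# ...yet cumulative returns diverge substantially.
cum_uk = np.prod(1 + uk) - 1
cum_world = np.prod(1 + world) - 1

print(f"monthly correlation: {corr:.2f}")
print(f"cumulative return gap: {(cum_world - cum_uk) * 100:.0f} percentage points")
```

The high correlation reflects the shared factor; the cumulative gap reflects the 0.3-percentage-point monthly drag, which correlation statistics simply do not measure.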

A second mistake has been ego-involvement. When the market fell last autumn I was pleased despite being worse off – I still had a quarter of my wealth in equities – because the drop vindicated my decision to cut my equity exposure in May. This is irrational, not least because a sense of being right can lead you to become overconfident.

You might think this shouldn’t be the case. Investing is a learning experience and we all occasionally learn that we were right. But there’s an asymmetry here. When we’re right we infer that we’re clever, but when we’re wrong we use what Sir Karl Popper called "immunising strategies" to stop us inferring that we are stupid. One of mine is the belief that failures (for example of seasonal or momentum investing) are statistical noise rather than a signal that the strategies have stopped working. I might be right, but perhaps I hide behind this too much.

A third mistake I make is the false consensus effect – a tendency to overestimate the extent to which other people share my beliefs and knowledge. This certainly colours my writing: in trying to tell you things you might not know, I can end up taking for granted knowledge you might not have. And it distorts my attitude to the economic world. I am often surprised when markets move because of things that I thought were known long ago. For example, when people blamed last year's fall in share prices in part upon China's slowdown, my reaction was: shouldn't that have been discounted months ago?

Another bias reinforces this error. It’s professional deformation – our tendency to have a distorted view of the world because of our professional training. Engineers, for example, are apt to overestimate the extent to which everything is an engineering problem, and lawyers over-estimate the importance of the law. In my case, my professional training as an economist gave me a strong presumption that the market discounts information quickly. It therefore has led me to underestimate the extent to which the aggregate market can be mispriced.

This training led me to assume for years that high risk meant high return, and vice versa. I now know this is often not the case. High-beta assets, for example, don't outperform low-beta ones on average, in contradiction of the capital asset pricing model. Some types of risk – such as bankruptcy risk – don't pay off. And for the aggregate market, periods of high volatility do not lead to higher returns.

Yes, it is economic research that has taught me these lessons. But it is textbook economics that meant I needed to learn them because it filled my head with daft ideas in the first place.

I’m not sure I’m wholly cleansed of professional deformation. Maybe my antipathy to Brexit is caused by economists’ tendency to overweight the importance of economic growth and downplay non-material things such as a sense of control or community.

There’s another error I might be making. Once you learn about cognitive biases you start seeing them everywhere. This itself is a cognitive bias – the confirmation bias. It can cause me to overestimate the importance of such biases and to underestimate other things that can lead people astray, such as bad incentives. When, for example, investors pile into overpriced assets (such as mortgage derivatives in the mid-2000s or tech stocks in the late 1990s), is this because of a cognitive bias (herding)? Or is it because of incentives to do what others are doing – for example, the fact that fund managers are judged on relative performance, which leads them to emulate others in an effort to reduce benchmark risk?

In fact, bad incentives might explain the observation I began with – that we are quicker to see cognitive biases in others than ourselves. Those who claim to be experts have strong incentives not to reveal their own errors of judgment in public. This does not, however, mean such errors don’t exist.