
Risky tales of tail risk

March 28, 2013

One of these is motivated reasoning; our estimate of the probability of a crisis is shaped by our political views. Supporters of a united Europe, willing the euro to succeed, are likely to under-rate the chances of a crisis, while eurosceptics, willing vindication of their prior, are likely to over-rate it. The latter might explain why William Hill is offering the ungenerous (to me!) price of 11/4 against the euro not being in use by the end of 2016.

A related, but analytically distinct, bias is wishful thinking. It is, of course, trivially true that investors who are long of equities attach lower probabilities to a crisis than those who are long of gold or bonds. But what's the direction of causation? Is it simply that optimism causes us to buy shares and pessimism causes us to buy bonds? Or could it be that, once we have bought an asset, our investment position shapes our opinion? An experiment by Guy Mayraz of the University of Melbourne suggests it is partly the latter. He randomly assigned subjects to two categories: "farmers", who gained from higher wheat prices, and "bakers", who lost from them. He then got both groups to look at charts of historic wheat prices and predict future ones. He found that "farmers" predicted higher prices than "bakers". Even quite small vested interests, then - the small sums people are paid in laboratory experiments - can affect our judgment.

A further bias is what Nassim Nicholas Taleb in The Black Swan calls the "narrative fallacy" - our preference for "compact stories over raw truths". This, he says, causes us to attach more credibility to stories with neat chains of causation, thus underweighting the likelihood of random, unforeseeable events. So, for example, eurosceptics can tell stories about possible bank runs and prolonged austerity provoking political backlashes, whilst pro-Europeans can tell us about the power of outright monetary transactions to shore up the euro. Such stories underweight the fact that time and chance happeneth to us all.

Yet another problem is the availability heuristic - our tendency to overestimate the probability of events that are vivid in our minds, available to our memory. Just as we are apt to overestimate the likelihood of being murdered or winning the lottery because we've seen such things on the news, so we might overestimate the chance of a renewed euro crisis simply because we've seen one recently.

There is, however, a corollary to this. Just as we overestimate the chances of events that loom large in our minds, so we underestimate the chances of events that don't. This is one reason why almost everyone was surprised by the crisis of 2008. Because a financial crisis triggering a deep recession hadn't happened since the 1930s, it was merely a faint historical memory and thus was underestimated.

This bias can itself generate financial 'cycles'. The late Hyman Minsky described how long periods of stability prove self-defeating: they eventually cause investors to forget that crises can happen, which encourages more and more speculative behaviour and hence, eventually, a crisis.

A new paper by Armin Haas, Mathias Onischka and Markus Fucik, three German economists, points out that this apparently irrational behaviour was actually formally embedded in banks' risk models before the crisis. Banks measured risk by looking at the volatility of past prices. But because their data over-sampled good times and under-sampled bad ones, this led them to underestimate the likelihood of disaster. Measuring risk by looking at past events, they say, is like steering a ship by looking only at its wake.

It's possible that we are repeating this mistake. In imagining that the next crisis will resemble ones we've seen, such as the euro crisis, we might well be underestimating both the chances of a completely different crisis coming out of the blue and the chance of no crisis at all.

It seems, then, that it is almost impossible to judge accurately the chance of disaster. What can we do about this?

Professor Haas and colleagues advocate Bayesian risk management. Probabilities, they say, are subjective. Rather than rely upon the false comfort of spuriously precise probabilities gleaned from partial data, we should gradually revise our subjective probabilities as new information emerges. This doesn't necessarily mean using Bayes rule in its formal mathematical sense, but rather observing its spirit by looking at as much evidence as possible rather than simply that which appears to bolster our prejudices.
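Bayes's rule itself fits in a few lines. The sketch below is purely illustrative - the prior, the likelihoods and the 'bad news' signal are hypothetical numbers of my own, not anything from the paper:

```python
# A toy Bayesian update: revise a subjective crisis probability as
# new evidence arrives. All numbers here are hypothetical.

def update(prior, p_evidence_if_crisis, p_evidence_if_calm):
    """Bayes's rule: P(crisis | evidence) from a prior and two likelihoods."""
    numerator = p_evidence_if_crisis * prior
    denominator = numerator + p_evidence_if_calm * (1 - prior)
    return numerator / denominator

# Hypothetical: start by believing there is a 20% chance of a euro crisis.
p = 0.20
# Suppose a piece of bad news is three times as likely in a world heading
# for crisis (0.6) as in a calm one (0.2). Update on seeing it:
p = update(p, p_evidence_if_crisis=0.6, p_evidence_if_calm=0.2)
print(round(p, 3))  # prints 0.429 - the revised subjective probability
```

The point is the discipline, not the decimals: each new piece of evidence nudges the probability, rather than being read as confirming whatever we already believed.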

Sadly, however, it's very hard to be rational Bayesians. Ed Glaeser and Cass Sunstein, two Harvard researchers, show in a new paper that balanced or ambiguous new information often polarises opinion - causing optimists to become more optimistic and pessimists more pessimistic - which flatly contradicts Bayesianism. This, they say, could be because of a "memory boomerang" effect; news reminds us of the facts that formed our prior beliefs, thus strengthening those priors. Paul Krugman has an unkinder explanation, which he calls the Dunning-Kruger-Madoff effect. People who are stupid enough to be wrong are too stupid to realise they are wrong, and they compound this error by associating with like-minded people and trusting their similarly wrong-headed opinions.

Luckily, there's another possibility here. We don't need to know the precise probability of a euro crisis - if indeed, the idea of a 'precise probability' makes any sense at all. We just need to know the rough probabilities of share prices falling.

Xavier Gabaix at New York University's Stern School of Business points out that there's a good statistical rule of thumb for equity returns - that they are distributed as a cubic power law, so that extreme moves are more common than a normal distribution implies. To see how this works, consider weekly moves in the All-Share index since January 1988; this is a data set which Professor Gabaix did not consider, so it serves as a test of the cubic power law prediction. The average change has been 0.13 per cent, with a standard deviation of 2.23 percentage points. A normal distribution says that in these 1,315 weeks, we should have seen two falls of three standard deviations or more - worse than a 6.56 per cent drop. In fact, we've seen nine. And it says there's only a 4 per cent chance that we'd have had one four standard deviation fall - 8.79 per cent - or worse. In fact, we've had three falls.
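These normal-distribution benchmarks are easy to check. The sketch below, using only the quoted sample statistics (1,315 weeks, mean 0.13 per cent, standard deviation 2.23 points), reproduces them:

```python
import math

# Check the normal-distribution benchmarks for the All-Share sample:
# 1,315 weeks, mean 0.13%, standard deviation 2.23 points.
WEEKS = 1315

def normal_tail(z):
    """P(Z <= -z) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# Expected number of falls of three standard deviations or worse (-6.56%):
expected_3sd = WEEKS * normal_tail(3)
print(round(expected_3sd, 1))  # 1.8, i.e. roughly the two falls quoted

# Chance of at least one four standard deviation fall (-8.79%) in the sample:
p_any_4sd = 1 - (1 - normal_tail(4)) ** WEEKS
print(round(p_any_4sd, 2))  # 0.04 - the 4 per cent quoted
```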

However, a cubic power law predicts these frequencies almost perfectly.

We can use this distribution, then, to estimate the likelihood of a bad event. It tells us that there's a one-in-500 chance of the All-Share falling 10 per cent or more in any week. This might not sound much. But it means that, over a year, there's a 10 per cent chance that a week will see such a drop.
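The step from a weekly to an annual chance is just the complement rule, assuming (as a rough approximation) that weeks are independent:

```python
# If a 10 per cent weekly fall has a one-in-500 chance, the chance that
# at least one of a year's 52 weeks sees such a drop is the complement
# of no week seeing one (treating weeks as independent - a simplification):
p_week = 1 / 500
p_year = 1 - (1 - p_week) ** 52
print(round(p_year, 2))  # about 0.1, the 10 per cent quoted above
```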

Now, this number isn't much help for professional risk-takers who are interested in time-varying risk, wanting to shift between 'risk-on' and 'risk-off' trades. And it's no use to those of you who want a nice plausible story to support your prejudice. But then, as someone said, it is better to be roughly right than precisely wrong.

The maths of the power law

The cubic power law is basically quite simple. To see how it works, start from a two standard deviation event. A fall of two standard deviations or more should occur 22.75 times in 1,000; this comes from the normal distribution, which applies tolerably well to price changes of up to two standard deviations. How much less likely are falls bigger than two standard deviations?

Let's take a 10 per cent drop. First, we convert this into standard deviation units. A 10 per cent price fall is 10.13 percentage points away from the average week, and 10.13 divided by 2.23 is 4.5. So, it's a 4.5 standard deviation event.

We then halve this number, because the power law scales price moves relative to our two standard deviation benchmark: 4.5 divided by 2 is 2.25. We then raise this to the power three. This is the cubic bit; the exponent of three is based not in theory, but in the observation of past data sets. This gives us 11.4.

We then divide 22.75 - the two standard deviation probability - by this number. This gives us the probability of a 4.5 standard deviation fall: 1.996 in 1,000. That's about one in 500.

We can do this for any large price move. A 20 per cent weekly fall is a nine standard deviation event. It should be 91 times less likely than the 22.75 in 1,000 chance, which is a 0.25 in 1,000 probability, or roughly a one in 4,000 chance.
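The whole recipe can be written as a one-line rule of thumb. The sketch below reproduces both worked examples:

```python
# The cubic power law rule of thumb described above.
# Probabilities are per 1,000 weeks; the anchor is the normal
# distribution's 22.75-in-1,000 chance of a 2-sigma fall or worse.
ANCHOR_SIGMA = 2.0
ANCHOR_PER_1000 = 22.75

def tail_per_1000(sigmas):
    """Chance per 1,000 weeks of a fall of `sigmas` standard deviations
    or worse, scaling the 2-sigma anchor by a cubic power law."""
    return ANCHOR_PER_1000 / (sigmas / ANCHOR_SIGMA) ** 3

# A 10 per cent weekly fall is a 4.5 standard deviation event:
print(round(tail_per_1000(4.5), 2))  # 2.0 in 1,000, i.e. one in 500

# A 20 per cent weekly fall is roughly a 9 standard deviation event:
print(round(tail_per_1000(9.0), 2))  # 0.25 in 1,000, about one in 4,000
```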