
Rescuing modern portfolio theory

Thanks in part to banks' terrible risk management practices, it has become fashionable to criticise modern portfolio theory (MPT). However, as a new paper by Brian Jacobsen of Wells Fargo Funds Management shows, many of these criticisms are misplaced; they are attacks not upon the theory itself, but upon bad uses of it.

Essentially, MPT is quite simple. It uses maths to answer the question: what combination of assets maximises expected returns for a given level of risk? Or, equivalently: what combination minimises risk for a particular expected return? To answer this, we only need the assets' expected returns, standard deviations and correlations. These suffice to generate efficient portfolios.
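To make this concrete, here is a minimal sketch in Python of what generating an efficient portfolio involves. The three assets and all their numbers are illustrative assumptions, not anything from Mr Jacobsen's paper; the programme solves for the minimum-risk weights that achieve a target expected return, with short-selling allowed, as in the textbook version of the problem.

    # A minimal mean-variance sketch: all inputs are illustrative assumptions.
    import numpy as np

    mu = np.array([0.04, 0.06, 0.08])        # expected returns
    sd = np.array([0.05, 0.10, 0.15])        # standard deviations
    corr = np.array([[1.0, 0.3, 0.2],
                     [0.3, 1.0, 0.5],
                     [0.2, 0.5, 1.0]])       # correlations
    cov = np.outer(sd, sd) * corr            # covariance matrix
    target = 0.06                            # required expected return

    # Minimise w'Σw subject to w'1 = 1 and w'μ = target: the Lagrangian
    # first-order conditions reduce to one linear system.
    ones = np.ones(3)
    kkt = np.block([[2 * cov, ones[:, None], mu[:, None]],
                    [ones[None, :], np.zeros((1, 2))],
                    [mu[None, :], np.zeros((1, 2))]])
    w = np.linalg.solve(kkt, [0, 0, 0, 1, target])[:3]

    print("weights:", w.round(3))
    print("risk:", np.sqrt(w @ cov @ w).round(4))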

And here's the thing. We should interpret these portfolios, says Mr Jacobsen, as probability distributions.

Take, for example, two possible efficient portfolios. Portfolio A offers an expected return of 4 per cent with a standard deviation of 5 percentage points. Portfolio B has an expected return of 7 per cent and a standard deviation of 10 percentage points.

We can then say that portfolio A gives us a 21 per cent chance of losing money while portfolio B gives us a 24 per cent chance. And portfolio A gives us a 0.3 per cent chance of a loss of 10 per cent or more, while portfolio B gives us a 4.5 per cent chance.
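Those figures are just normal-distribution arithmetic. A few lines of Python, using scipy and the two portfolios' means and standard deviations, reproduce them.

    # Loss probabilities for the two portfolios, assuming normal returns.
    from scipy.stats import norm

    mean_a, sd_a = 0.04, 0.05             # portfolio A
    mean_b, sd_b = 0.07, 0.10             # portfolio B

    print(norm.cdf(0, mean_a, sd_a))      # ~0.21: chance A loses money
    print(norm.cdf(0, mean_b, sd_b))      # ~0.24: chance B loses money
    print(norm.cdf(-0.10, mean_a, sd_a))  # ~0.003: A loses 10% or more
    print(norm.cdf(-0.10, mean_b, sd_b))  # ~0.045: B loses 10% or more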

It's easy, therefore, to translate the outputs of MPT into other measures of risk, such as the probability of loss, or the probability of a loss of a particular size. Yes, it looks as if MPT judges portfolios by their standard deviation, but this is just an illusion.

You might object here that my calculations rest upon a particular assumption - that returns are normally distributed.

True. But it is easy to relax this assumption to take account of tail risk - the fact that extreme returns are more likely than a normal distribution predicts.

Say we're interested in the probability of a 30 per cent loss. For portfolio A, this is a 6.8 standard deviation event; for portfolio B, a 3.7 standard deviation one. A normal distribution says the former is a one-in-200bn chance. But we can apply a cubic power law instead, as this is a better description of equity risk and much currency risk. This tells us that portfolio A has a one-in-1,727 chance (to be overly precise) of losing 30 per cent, while portfolio B has a one-in-278 chance.

It's easy, then, to reconcile MPT with non-Gaussian probability distributions. Another criticism of MPT - that it assumes a normal distribution - thus vanishes.

There's more. Thinking of efficient portfolios as probability distributions tackles another criticism of MPT - that it can't handle assets whose risk can't be measured by standard deviation at all.

In itself, this criticism is valid. Some assets - low-grade bonds, managed currencies, or writing out-of-the-money put options - offer steady returns in normal times with a small chance of a huge loss. In many samples, the standard deviation of such assets looks low, because the wipe-out didn't happen. A naive use of MPT would therefore advise us to invest heavily in them. This would be silly. But this is no fault of MPT, any more than it is a fault of your lawnmower that it gives you a lousy haircut - you're using the wrong tool for the job.

But there's a solution. First, use MPT for those assets whose risk can be measured by standard deviations: equities, commodities and freely floating currencies. Then compare the probabilities generated by efficient portfolios of these with those of (say) corporate bonds. For example, if your MPT portfolio has a 1 per cent chance of losing 20 per cent, do you want to add to it a bond offering a 1 per cent chance of a 50 per cent loss? The answer will depend upon the bond's expected return and the dependence between the two events. It would be silly to assume they are independent, because the events that cause an equity portfolio to fall - such as recession - would also increase the chances of a default. Whether you use the historic record of defaults and equity returns in recessions to estimate the joint probabilities, or just judgment, is a separate issue. The fact remains that using MPT alongside some thought can be helpful.
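To sketch how that joint probability might be computed, here is one standard device - a Gaussian copula. The 0.6 dependence figure is purely an assumption for illustration; neither the article nor the paper prescribes a number or a method.

    # Joint chance of both losses, under an assumed Gaussian copula.
    from scipy.stats import norm, multivariate_normal

    p_equity = 0.01      # chance the MPT portfolio loses 20% or more
    p_bond = 0.01        # chance the bond loses 50% (default)
    rho = 0.6            # assumed dependence between the two events

    # Map each marginal probability to a normal threshold, then ask how
    # often a correlated bivariate normal breaches both at once.
    z = [norm.ppf(p_equity), norm.ppf(p_bond)]
    joint = multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]]).cdf(z)

    print(joint)              # many times larger than...
    print(p_equity * p_bond)  # ...the 0.0001 independence would imply

Even with this crude dependence assumption, the joint chance comes out many times larger than independence would suggest - which is the point about recessions.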

There's a further criticism of MPT that Mr Jacobsen disposes of - that it relies upon historic returns, volatilities and correlations, which might not be representative of the future.

But this needn't be the case, he says. We can doctor the historic record to make it more representative of future conditions. If, say, we want a portfolio that can cope in times of recession and high volatility, we can add recessionary periods to our sample and cut out boom times. Alternatively, we can estimate correlations by using factor models - such as stocks' sensitivities to moves in the general market or to macroeconomic conditions. Blindly using historic data samples is only one option, and it needn't be the best.
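As a sketch of the factor-model route, the snippet below builds a covariance matrix from assumed betas to the general market plus stock-specific risk, instead of from raw historic pairwise correlations. Every input is invented for the example.

    # Single-factor covariance: correlations come from market betas.
    import numpy as np

    betas = np.array([0.8, 1.0, 1.3])       # sensitivities to the market
    idio_sd = np.array([0.04, 0.05, 0.08])  # stock-specific volatility
    market_sd = 0.15                        # volatility of the market factor

    cov = np.outer(betas, betas) * market_sd**2 + np.diag(idio_sd**2)
    sd = np.sqrt(np.diag(cov))
    print((cov / np.outer(sd, sd)).round(2))  # implied correlation matrix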

Perhaps, then, we should show MPT more respect. For an idea that's approaching its 60th birthday, it is in good shape. Yes, it's dangerous if used badly. But this is true of many powerful and useful tools.

CUBIC POWER LAWS: A NOTE

The gist of a cubic power law is straightforward. It says that while a bell curve is a decent enough description of small deviations from the average - up to around two standard deviations - it is a terrible description of larger deviations, as these are much more common than the bell curve predicts.

The cubic power law just quantifies this. Say we want to know the probability of a 6.8 standard deviation event - the 30 per cent loss in portfolio A. A normal distribution says this is so improbable that we can ignore it. A cubic power law says differently. It quantifies the probability relative to a two standard deviation event - a 2.275 per cent chance.

Quite simply, to estimate the chance of a 6.8 standard deviation event, divide 6.8 by two - because the law is benchmarked to a two standard deviation event - and raise the result to the power of three; this is the cubic bit. This gives us 39.3. Then divide the 2.275 per cent (the one-tailed chance of a two standard deviation shortfall - the bad half of the distribution) by this. We get 0.058 per cent, which is a one-in-1,727 chance.
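Put as code, the recipe is a one-line function; the 2.275 per cent benchmark is simply the normal one-tail chance of a two standard deviation shortfall.

    # The note's cubic power law as a function.
    from scipy.stats import norm

    def cubic_tail(z, benchmark=2.0):
        """Chance of a shortfall of z standard deviations or worse."""
        return norm.cdf(-benchmark) * (z / benchmark) ** -3

    print(1 / cubic_tail(6.8))  # ~1,727: portfolio A losing 30%
    print(1 / cubic_tail(3.7))  # ~278: portfolio B losing 30%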


By Chris Dillow,
29 March 2010
