
The emotion of investing

In the first of a new series joining the dots between some of the best-loved books on finance and markets, we scoop out the best of the best from three key books on behavioural economics to boost your investing prowess (and dinner party conversation).
August 21, 2015

Behavioural economics has become increasingly fashionable and can tell us a lot about the way our brains get in the way of objective, logical and, in many cases, correct decisions. Here we look at three seminal books and see where their ideas meet.

 

Why we are more emotional than we think

We are all familiar with the idea that people tell themselves what they want to hear, or that we are more inclined to give a positive response when we're not hungry, tired or emotional. We assume that we can eliminate those biases and emotions from our problem solving when we really need to, but what if we can't? Daniel Kahneman, Richard Thaler, Cass Sunstein and Nassim Taleb set out, in the books covered in this feature, to show us why our powers of decision-making and rational thought might not be as trustworthy as we think they are.

Kahneman, author of Thinking, Fast and Slow, and his collaborator Amos Tversky use the concept of heuristics to describe the biases and imperfect mental strategies we use to solve problems and make decisions, and to explain the mistakes we make as a result. Heuristics - an idea taken on board by Taleb, Thaler and Sunstein - cover things such as educated guesses and intuitive responses. A heuristic is the practice of coming up with "an adequate, though often imperfect" answer, according to Kahneman, and could range from guessing the number of red Smarties in a jar of sweets to judging the validity of a statistical statement when you are not completely sure of the facts.

 

Why we are always lying to ourselves

This matters because you might be using these imperfect heuristics a lot more than you think. Here are two examples, known as the issues of 'substitution' and 'confirmation'. First, substitution: think about the number of questions you have to answer and decisions you make every single day. It is rare that you find it impossible to come up with an answer to something, even though the data at your fingertips is almost always too limited to support a really solid conclusion. So how do we answer so many questions? By substituting easier ones.

Consider the question 'how much would you contribute to save an endangered species?' With so many competing environmental issues and so many competing financial priorities of your own, it is a hard question to answer. It is easier to answer 'how much emotion do I feel about dying dolphins?' and come up with a corresponding amount in pounds. A German experiment shows this in action. Students were asked two seemingly unrelated questions: 'How happy are you these days?' and 'How many dates have you had this month?' When the dating question came second there was no correlation between number of dates and happiness, but when the questions were reversed there was a very high correlation. Kahneman suggests this is because the happiness question is abstract and hard to answer. But students who had just been primed with an answer about how happy they were with one element of their lives (their love lives) substituted that emotional response to answer the wider question.

For Kahneman this is related to a split between two systems of thinking, system one and system two. The impulsive 'system one' quickly and intuitively answers questions (eg two plus two). The more engaged 'system two' is called into action when answers are more complicated (57 x 69). The split makes us far more prone to bias and illogical thinking than we realise. According to Taleb, author of The Black Swan, "most of our mistakes come from using system one when we are in fact thinking that we are using system two".

One key mistake is created through something called confirmation bias, which forms a key part of all three books we looked at.

Confirmation bias is about believing what we want to believe. We seek out information that confirms what we already think, rather than looking for things that might negate it. For Taleb, confirmation bias and the way we seek proof from corroborating facts is another example of our catastrophic failure to predict events. Think of a turkey, he says. The turkey is fed every single day for 1,000 days, each day compounding its expectation of being fed tomorrow, until Thanksgiving (or Christmas to us), when it is killed. The corroborating fact of being fed day after day was never evidence that it would be fed tomorrow. He says we confuse confirmation with evidence of larger rules: ie I had lunch with a convicted murderer today but he didn't murder anyone in front of me, so there is no evidence of him being a killer, ergo he cannot be a murderer. The reasoning is plainly faulty: absence of evidence is not evidence of absence.

Nudge takes the wide range of ways in which we are instinctively and subconsciously biased, and uses them to guide our decision-making in real life. The big idea behind Richard Thaler and Cass Sunstein's book is that behavioural economics can be used to help people make better choices. Both the Cameron and Obama governments have embraced the academics' work and their belief that better policy outcomes can be achieved at little or no extra cost by making clever appeals to human psychology.

David Cameron was so won over by this concept that he brought it right into the heart of government in 2010, setting up a 'Nudge unit' to find ways of subtly changing behaviour by working with our tendency to stick with the status quo. Think of pension auto-enrolment, for example, a success of Nudge theory - a dramatic change that works precisely because it allows us to remain inert. Cameron's Nudge unit claimed successes in areas ranging from Job Centre interviews to new letters encouraging car tax payments (including photographs of people's cars) before it was spun off into a joint venture last year. Thaler and Sunstein describe this approach as "libertarian paternalism", arguing that it is very hard to make individuals act in a new way, even when their current habits are failing them.

 

Can we really be that stupid?

But how fair is this? It's certainly true that all of these authors present a slightly depressing picture of the human brain, as one which is useless at incorporating unknown data and very good at telling us exactly what we want to hear or already know.

What if our in-built encyclopedia of experiences, knowledge and memories is not a potential hazard when it comes to problem solving, but actually helps us make faster judgments that are not flawed but more accurate? German psychologist Gerd Gigerenzer - a prominent critic of Kahneman's ideas - uses the phrase 'adaptive toolbox' to describe the way that businesses and individuals use heuristics to handle situations of uncertainty involving limited time, computational resources and information. The contents of the toolbox are shaped by evolution, learning and culture for specific domains of inference and reasoning.

 

Why none of us know anything about anything

So Kahneman and Taleb think we can't trust ourselves to make logical decisions, and Thaler and Sunstein think we can be duped into doing what others want by tapping into our biases. But what if we can't trust the experts either? Taleb in particular paints a grim picture of our ability to model risk and predict events such as terrorism or financial crises, given our in-built biases.

We have seen that we are liable to kid ourselves about what we do and do not know. Concepts such as confirmation bias make it easy to think we can generalise from events that have happened and predict the future. But that is flawed. Taleb and Kahneman say that to make the world legible, we feel the need to weave facts together into a narrative. Taleb calls this a 'narrative fallacy'.

"Flawed stories of the past shape our views of the world and our expectations for the future," says Kahneman. In this context, our obsession with risk modelling and statistical analysis all looks a bit useless, as it does not account for the unforeseen one-offs that define some of the biggest crises of our time. Taleb's key thesis is that in the last half century, economists, investors and even politicians have increasingly viewed the world through the prism of statistical analysis. By doing so, they have tried to make the future a more certain place, in which everything from voter intentions to profits can be predicted and forecast within a small margin of error. When the unforeseen occurs, the same group has a bad habit of making past events seem entirely predictable with hindsight.

So according to Kahneman and Taleb, experts are little better than lay people when it comes to predicting outlier events, and may not be any better at making decisions than a random throw of the dice. For Kahneman (who calls these issues the illusions of validity and understanding), this became evident during his work with the Israeli army, when he was tasked with evaluating candidates for officer training. He came up with a test to evaluate soldiers, using a range of highly specific data, only to find that when the results came back from officer training, his forecasts were little better than random guesses. The issue has also been demonstrated in trading rooms: finance professor Terry Odean studied the trading records of 10,000 brokerage accounts to monitor how stocks performed after they were sold. The results were atrocious - on average, the shares investors sold went on to do better than the ones they bought.

So there are illusions of validity and illusions of understanding at work here, and acceptance of them has even permeated investment houses. In a recent presentation, experts at fund manager Baillie Gifford said: "Overconfidence is perhaps the most disastrous of all problems in judgment and yet the most prevalent in the world of business and finance.

"One important consequence of overconfidence seems to be our instinctive belief that the future is both largely predictable and controllable. Yet, the evidence is that the future is remarkably unpredictable. Things that have never happened before happen all the time. As investors, we have to cope with uncertainty, however uncomfortable. The secret, for investing as well as human evolution, is not to predict but to be prepared and be adaptable. Chance favours the prepared mind."

 

Why we're so terrible at losing

Perhaps the most celebrated element of Kahneman's work is in the area of 'prospect theory', for which he won a Nobel Prize in 2002. Prospect theory deals with decision-making when taking economic risks - for example trading on the stock market - and says that our decisions are often driven not by the effect on our overall wealth but by the emotions we attach to winning and losing. In Kahneman's words, "people attach values to gains and losses rather than to wealth".

For example, it seems logical that two different people would be equally glad to gain £100, but that, of course, is not true. A millionaire would not feel the same way about gaining £100 as someone earning minimum wage, or someone who had gained and then lost £500 the day before.

In numerous examples Kahneman shows that we are highly reluctant to sell things we already own - and will demand illogically high prices for them. In one experiment, half of the participants were given a mug and half were given cash. Sellers (those with a mug) indicated the price they were prepared to sell at, while buyers (those without) indicated a buying price. The average buying price was half of the average selling price, despite the fact the mug had no sentimental value to any of the sellers before the test.

Prospect theory also shows that we are far less willing to take a gamble framed as a potential loss than one framed as a potential gain, even when the choices involve exactly the same level of risk. It also looks at why we are often irrationally unwilling to sell things when the price on offer is lower than the price we originally paid, thanks to an emotional aversion to loss rather than any mental calculation of the overall change in our wealth.

Consider this illustration of our attitude towards gains and losses: how would you feel about gambling on a 10 per cent chance to win £95 and a 90 per cent chance to lose £5? How about paying £5 to play a lottery where you have a 10 per cent chance of winning £100 and a 90 per cent chance of winning nothing? You probably feel quite differently about the two - at least many people do - even though they are in fact the same bet. This is because we feel very differently about losses and costs.
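
To see why the two framings really are the same bet, it helps to write the net outcomes out explicitly - a quick worked check of our own, not something taken from Kahneman's book:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}

% Net outcomes of the two framings, written side by side.
% Gamble: win 95 with probability 0.10, lose 5 with probability 0.90.
% Lottery: pay 5 up front, then win 100 with probability 0.10 and nothing otherwise.
\[
\text{Gamble:}\quad
\begin{cases}
+\text{\pounds}95 & \text{with probability } 0.10\\[2pt]
-\text{\pounds}5  & \text{with probability } 0.90
\end{cases}
\qquad
\text{Lottery:}\quad
\begin{cases}
-\text{\pounds}5 + \text{\pounds}100 = +\text{\pounds}95 & \text{with probability } 0.10\\[2pt]
-\text{\pounds}5 + \text{\pounds}0 = -\text{\pounds}5 & \text{with probability } 0.90
\end{cases}
\]

% Both framings therefore share the same expected value:
\[
0.10 \times 95 \;-\; 0.90 \times 5 \;=\; 9.50 - 4.50 \;=\; +\text{\pounds}5
\]

\end{document}
```

Outcome for outcome the two are identical; only the wording changes - a possible 'loss' in one case, a 'cost' already paid in the other - and that is enough to change how the bet feels.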

Prospect theory helps explain why we find it so hard to sell a stock below the price we paid for it and why we are more likely to sell winners and hold on to embarrassing losers, even when we could be headed for greater losses in the long term by holding on. Markets and investors do not behave rationally, despite all economists' protests.

The ideas put forward by Kahneman here are closely linked to Thaler's idea of mental accounting, in which we have different emotional connections to different mental pots of money and treat those pots differently, even when our overall wealth remains the same. And attitudes and emotions in markets and investing are also a key theme in Nudge theory.

One of the most salient observations Nudge makes explicitly about investors is the predisposition we all have to follow crowds. The book cites the example of investment clubs that tended to perform poorly when the members were conformist and easily guided by the first person to speak. It also looks at the increasing equity exposure of investors in Vanguard funds in the run-up to the dot-com collapse and the low level of equity exposure investors had going into the recovery - a classic buy-high-sell-low blunder.

The flawed reaction of many investors to bull and bear markets is also a reflection of the fact that people tend to put more weight on events the more recent and extreme they are. The authors also highlight our susceptibility to over-optimism and loss aversion, which can cause investors to inadvertently take on too much risk and to suffer from inertia when better options become available. The book also contains some fascinating examples of our difficulty in distinguishing between randomness and real patterns, which can lead us to use events merely to confirm our existing theories when they could instead be pointing us towards better and more valuable insights.