Value investors believe the market is not perfectly rational. To understand why, one must examine human behaviour. In How We Know What Isn't So, which is recommended by a number of value investors and behavioural economists, Thomas Gilovich explores the fallibility of human reasoning. Only by understanding our flaws can we seek to correct them, thereby improving our decision-making.
Some bias in decision-making is good. If, every time we made a decision, we had to re-evaluate all of the relevant data we had ever encountered, the process would be hopelessly inefficient. For example, if we encounter an article claiming a swami has mastered levitation, it would be a mistake to discard the prior beliefs that stem from our understanding of gravity. But sometimes our biases influence our decisions more than they should, turning healthy skepticism into closed-mindedness.
Gilovich discusses a number of studies demonstrating how our biases prevent us from properly weighing evidence that disconfirms our existing point of view. Traditionally, it was believed that humans simply forget disconfirming evidence. Gilovich argues, however, that in many cases we remember disconfirming evidence quite well, but we subject it to so much extra scrutiny that we end up explaining it away.
A lengthy discussion follows of the factors that determine whether we forget disconfirming evidence or merely discount it. The full details are too much to cover here, but as an illustrative example, contrast the belief that it rains after one washes one's car with the sports gambler's belief that he can win despite a history of losses. In the former case, people are likely to simply forget the times it was sunny after they washed the car, since this evokes no sharp feeling, whereas they feel regret when it rains after a wash. In the latter case, the gambler does feel the pain of a loss, so it is not that he forgets disconfirming evidence. Instead, he tends to analyze the causes of his losses and find reasons that explain them away (e.g. "If it weren't for that play/injury/call/mistake, I would have won"), while his wins, however lucky, go unquestioned.
Gilovich also discusses scenarios in which our biases can play particular havoc with our conclusions. When a conclusion is vaguely defined, our opinions are susceptible to plausible-sounding explanations that may have no basis in truth. For example, if the question is whether children who attend daycare have difficulty "adjusting", it is difficult to test the conclusion objectively, and therefore considerable credence tends to be given to plausible explanations that probably aren't true. But if the question is whether children who attend daycare underperform academically, it is much easier to test objectively, and biased explanations are therefore less likely to persist.