Saturday, June 11, 2011

How We Know What Isn't So: Chapter 3

Value investors believe the market is not perfectly rational. To understand why, an examination of human behaviour is required. In How We Know What Isn't So, a book recommended by a number of value investors and behavioural economists, Thomas Gilovich explores the fallibility of human reasoning. Only by understanding our flaws can we hope to correct them, thereby improving our decision-making.


Humans clearly recognize that evidence is required before something can be believed. Where they are led astray, however, is in confusing necessary evidence with sufficient evidence: though items of evidence may be necessary to support a conclusion, they are often not sufficient to prove it, and this is where humans frequently go wrong.

For example, humans put too much emphasis on positive instances when intuitively testing two variables for correlation or causality. To illustrate, Gilovich discusses the common belief that parents who adopt are subsequently more likely to conceive. Humans believe this because the (positive) instances they've encountered, parents who adopted and then conceived, stick out in their minds. Without weighing the frequencies of the other possible outcomes (adoption without subsequent conception, conception without adoption, and couples who did neither), humans treat this salient evidence as sufficient.
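To see why the positive instances alone are not sufficient, consider the full 2x2 table of outcomes. Here is a minimal sketch in Python (the counts are hypothetical, invented purely for illustration, and are not data from Gilovich's book):

```python
# Hypothetical counts for 1,000 couples trying to conceive
# (illustrative numbers only; not data from the book).
#
#                     conceived   did not conceive
#   adopted                8             92
#   did not adopt         72            828

adopted_conceived = 8
adopted_not = 92
not_adopted_conceived = 72
not_adopted_not = 828

# The memorable "positive instances" are the 8 couples who adopted and
# then conceived. But testing for correlation requires comparing rates
# across all four cells:
rate_if_adopted = adopted_conceived / (adopted_conceived + adopted_not)
rate_if_not_adopted = not_adopted_conceived / (not_adopted_conceived + not_adopted_not)

print(f"Conception rate among adopters:     {rate_if_adopted:.1%}")      # 8.0%
print(f"Conception rate among non-adopters: {rate_if_not_adopted:.1%}")  # 8.0%
# The rates are identical: no correlation, despite the vivid positive cases.
```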

Humans are also selective about the kind of evidence they seek. Depending on how a question is phrased or what he already believes, a person will seek out confirmatory evidence and discard evidence that opposes the existing conclusion. (This tendency, known as confirmation bias, has been discussed several times before on this site.)

In many cases, however, the data required to test our assumptions is not even available, which creates its own set of challenges, including poor calibration. For example, school admissions personnel cannot know whether they accepted or rejected the right students, for they never get to see how the students they rejected would have performed. Research suggests that those in fields where such feedback is provided (e.g. sports management, where the managers of one team get to see the players they rejected succeed or fail on other teams) are better calibrated.
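For clarity, "calibration" here means that when someone claims an event is, say, 70% likely, it actually occurs about 70% of the time. Below is a minimal sketch of that check, using hypothetical forecast/outcome pairs invented for illustration; the point is that the check is only possible when outcomes (i.e. feedback) are observable:

```python
from collections import defaultdict

# Hypothetical (stated probability, outcome) pairs -- e.g. a scout's
# stated chance that a prospect succeeds, and whether he actually did
# (1 = succeeded, 0 = failed). Invented for illustration.
forecasts = [(0.7, 1), (0.7, 1), (0.7, 0), (0.7, 1),
             (0.3, 0), (0.3, 1), (0.3, 0), (0.3, 0)]

# Group outcomes by stated probability, then compare the observed
# frequency with the stated probability. Close agreement means the
# forecaster is well calibrated; this check is impossible without
# feedback on outcomes.
buckets = defaultdict(list)
for stated, outcome in forecasts:
    buckets[stated].append(outcome)

for stated, outcomes in sorted(buckets.items()):
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {stated:.0%} -> observed {observed:.0%} ({len(outcomes)} cases)")
```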

A lack of evidence about "what would have happened" can also give rise to self-fulfilling prophecies. For example, a market participant may start a rumour that a bank is under-capitalized, setting off a chain reaction that buries the bank. The rumour's starter then appears to have been right, when in fact his stature or publicity may have caused the very outcome he predicted. Since we cannot know what would have happened had the rumour never been started, we may come to believe things that just aren't so.
