Thoughts on Fooled By Randomness
"I believe that the principal asset I need to protect and cultivate is my deep-seated intellectual insecurity. My motto is "my principal activity is to tease those who take themselves and the quality of their knowledge too seriously."" --Nassim Taleb
I begin with a look at black swan dynamics.
A black swan is an outlier, an event that lies beyond the realm of normal expectations. Most people expect all swans to be white because that's what their experience tells them; a black swan is by definition a surprise. Nevertheless, people tend to concoct explanations for them after the fact, which makes them appear more predictable, and less random, than they are. Our minds are designed to retain, for efficient storage, past information that fits into a compressed narrative. This distortion, called the hindsight bias, prevents us from adequately learning from the past.
a)
So for starters, because we have a tendency to view many elements of our past with more causal links, more narrative attributes, and less randomness than actually existed at the time, we are effectively doing two things. One, we are tricking ourselves, because in many cases the causal links we 'create' are simply false-- they aren't actually there. And two, because there is a disconnect between perception and reality, we are misjudging what we know. In other words, we are misjudging the quality of our knowledge. This is one aspect of Taleb's statement.
b)
"Lucky fools do not bear the slightest suspicion that they may be lucky fools-- by definition, they do not know that they belong to such a category. They will act as if they deserved the money. Their strings of successes will inject them with so much serotonin (or some similar substance) that they will even fool themselves about their ability to outperform markets (our hormonal system does not know whether our successes depend on randomness)... Scientists found out that serotonin, a neurotransmitter, seems to command a large share of our human behavior. It sets a positive feedback, the virtuous circle, but, owing to an external kick from randomness, can start a reverse motion and cause a vicious circle. It has been shown that monkeys injected with serotonin will rise in the pecking order, which in turn causes an increase of the serotonin level in their blood... Randomness will be ruled out as a possible factor in their performance, until it rears its head once again and delivers the kick that will induce the downward spiral."
The point is that random events, good or bad, have a suboptimal effect on our behavior, one we cannot control. Because our bodies have great difficulty distinguishing a good stimulus caused by our own ability from one caused by randomness, we unconsciously pump ourselves up when there is no logical reason for it. Undeserved confidence in one's abilities is dangerous. Our perception of our ability to perform certain acts changes to incorporate the newfound confidence. Thus when we perform those acts the next time, we subconsciously inflate the odds with which we believe we will succeed, when in fact the odds should remain the same. In essence, we have subconsciously increased our perceived aptitude at performing that task-- we have subconsciously (and wrongly) increased what we believe our knowledge level is. This is another angle from which Taleb approaches the 'quality of knowledge' in his motto, because again there is a disconnect between personal perception and reality. It also highlights the connection between how seriously we take ourselves and the quality of the knowledge we actually possess.
When making decisions that require us to make intelligent guesses about what will happen in the future, it might be helpful to keep a few things in mind. The first has to do with the points mentioned above-- any source of information or any assumption we lean on to make a decision should be vigorously tested for its quality and validity. We should only very reluctantly accept anything as a certainty, simply because there are so many cards stacked against our making an optimal decision. We shouldn't rely too heavily on past information, because each event that happened in the past was only one of countless possibilities.
Counterintuitively, the most important test of whether or not our decision was a valid one is not what ended up happening as a result, because that result was only one of any number of other equally or more probable results. The most important test is how intelligent that decision was in light of all the information one had at that point in time-- the optimality of the decision is a function of its expected value, factoring in all possible future states of nature, and its variance when compared to all other decisions one could have made at that point in time.
To shine some light on this, consider the game of Russian roulette. Let's say a man is handed a revolver with a single bullet in one of its six chambers. The rules of the game are as follows-- if he fires the gun at himself and does not die, he gets a million dollars in cash. If he does die, well, sorry. Let's say the man accepts, fires the gun, and gets lucky. He is left with a million dollars, but his decision was obviously not the optimal one. We know that because we know all the rules of the game: the risks, the payoffs, and the underlying probability distribution. [Taleb calls the mechanism that produces the outcome-- the rules, risks, and probabilities behind it-- the 'generator.'] The whole point is that in real life, we don't know all the rules of the game. We don't know all the risks, the payoffs, and the underlying probability distribution, because there are so many factors involved that it would be impossible to incorporate every relevant variable into a rational, optimal decision. So when we look back on historical events, we see the end result (the man getting the million), but we are almost never able to see the generator. Without knowing all the alternative histories, we must be very, very careful about making a decision now simply because it worked a few times in the past. We can, and often will, end up with the metaphorical bullet in our heads because of an over-reliance on past data and an under-appreciation of the only thing that really matters-- the generator, the search for the rules of the game.
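To put rough numbers on this, here is a minimal sketch in Python (not from the book). The dollar figure placed on the player's life and the ten thousand simulated "alternative histories" are arbitrary assumptions purely for illustration; the point is that the generator condemns the decision no matter how the single observed outcome turns out.

```python
# Illustrative sketch (not from the book): the expected value of the Russian
# roulette "decision", under an assumed (and deliberately crude) dollar value
# placed on the player's life. Judged by the generator -- the rules and
# probabilities -- the decision is terrible, even when the outcome is lucky.
import random

PAYOFF_SURVIVE = 1_000_000      # prize for surviving the trigger pull
COST_OF_DEATH = -50_000_000     # hypothetical dollar cost assigned to dying
P_DEATH = 1 / 6                 # one bullet, six chambers

expected_value = (1 - P_DEATH) * PAYOFF_SURVIVE + P_DEATH * COST_OF_DEATH
print(f"Expected value of playing: {expected_value:,.0f}")   # heavily negative

# Simulate many "alternative histories": most players walk away rich, which is
# exactly why judging the decision by its observed outcome is misleading.
random.seed(0)
outcomes = [PAYOFF_SURVIVE if random.random() > P_DEATH else COST_OF_DEATH
            for _ in range(10_000)]
survivors = sum(1 for o in outcomes if o > 0)
print(f"Share of histories where the player survives: {survivors / len(outcomes):.1%}")
print(f"Average outcome across histories: {sum(outcomes) / len(outcomes):,.0f}")
```

Roughly five out of six simulated players end up rich, yet the average across all histories is deeply negative under any remotely sane cost assigned to death. The observed result flatters the decision; the generator does not.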
One final thought while on the subject of using past data to draw inferences on the future: survivorship bias. Survivorship bias is another reason why we must be very careful when drawing on past data. I'll go back to the example of the authors in the book "The Millionaire Next Door" citing that risk-taking is a common quality shared by many rich people. The authors then imply that risk taking is a good thing; it is something we may need if we want to become rich ourselves. The natural reaction when testing this claim is to get an estimate of how many people are risk takers, and how many of those risk takers are rich. We can then reach some conclusion about the probability that a risk taker will end up rich.
There is one flaw in this measure of probability, though. It doesn't take into account all the people who were risk takers but who, through their excessive use of risky policies, went bust and cowered away into obscurity. None of the real losers will show up in our statistic, because only the winners have survived up to this point.
One can see that the proper way to run this test would be to find all the people, rich and poor, who are risk takers today, and see where those very same people are in 10 years' time.
But we can't really do that with history or historical information. Do most risk takers end up rich? Or did all the risk takers who didn't make it go bust, so that there weren't many "losing" risk takers left by the time the test was done?
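To make the distortion concrete, here is a small simulation sketch in Python. The probabilities are entirely made up (they are not from the book or from any study); the only point is to show how much the estimate inflates when the busted risk takers vanish from the sample.

```python
# Illustrative sketch with hypothetical numbers: simulate a cohort of risk
# takers, most of whom neither get rich nor survive in the data, and compare
# the honest cohort statistic with the survivorship-biased one computed only
# from the people still visible at the end.
import random

random.seed(1)
N = 100_000
P_RICH_GIVEN_RISK = 0.05   # assumed: 5% of risk takers end up rich
P_BUST_GIVEN_RISK = 0.60   # assumed: 60% go bust and drop out of sight

cohort = []
for _ in range(N):
    r = random.random()
    if r < P_RICH_GIVEN_RISK:
        cohort.append("rich")
    elif r < P_RICH_GIVEN_RISK + P_BUST_GIVEN_RISK:
        cohort.append("bust")      # these people never show up in the later sample
    else:
        cohort.append("ordinary")

honest = cohort.count("rich") / len(cohort)
visible = [c for c in cohort if c != "bust"]       # only the survivors are observed
biased = visible.count("rich") / len(visible)

print(f"True P(rich | risk taker): {honest:.1%}")              # about 5%
print(f"Estimate from survivors only: {biased:.1%}")           # about 12-13%
```

With these made-up numbers, looking only at the survivors more than doubles the apparent payoff of risk-taking, which is precisely the trap in reasoning backward from today's millionaires.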
The whole point is to be very careful about what you consider to be true. Chance plays a larger part than we tend to think, and we are subject to numerous biases that can sway us from what is actually true.
More to come on black swan dynamics, which are profoundly interesting.