Models fail in the fourth quadrant, where you have fat or unknown tails and complex payoffs. A lot of phenomena fall there: epidemics, environmental problems, general risk management, insurance, natural catastrophes. That quadrant also contains phenomena with very complex payoffs, such as payoffs from innovation, errors in analyses of deviations, and derivative payoffs.
The other part that caught my attention was the estimation of fat tail risk.
As most readers know, all the fundamental models of finance theory use Gaussian (normal) distributions. Trading markets do not have normal probability distributions. Eurointelligence had a bit of fun with the particularly wild ride of last October:
October 2008 was certainly a spectacular month in the stock markets….
Those of us who studied modern finance theory, however, were truly astonished by the sheer improbability of the events occurring in the stock markets during that fateful month. One of the basic assumptions used in almost all our finance models is that returns are normally distributed. These models are widely used to price derivatives and other complex financial products. What do these models tell us about the probabilities of the events that occurred in October?
The following table gives an answer. We selected the six largest daily percentage changes in the Dow Jones Industrial Average during October, and asked how frequently such changes occur assuming that, as is commonly done in finance models, returns are normally distributed. The results are truly astonishing. There were two daily changes of more than 10% during the month. With a standard deviation of daily changes of 1.032% (computed over the period 1971-2008), movements of such a magnitude can occur only once every 73 to 603 trillion billion years. Since our universe, according to most physicists, has existed a mere 20 billion years, we finance theorists would have had to wait for another trillion universes before one such change could be observed. Yet it happened twice during the same month. A truly miraculous event. The other four changes during the same month of October have a somewhat higher frequency, but surely we did not expect these to happen in our lifetimes.
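The arithmetic behind those waiting times is easy to reproduce. Here is a minimal sketch, assuming the quoted 1.032% daily standard deviation, a two-sided Gaussian tail, 252 trading days per year, and a 10.88% move (roughly the smaller of the month's two double-digit swings; the exact size is my assumption, not stated in the quote):

```python
from math import erfc, sqrt

def recurrence_years(move_pct, sigma_pct=1.032, trading_days=252):
    """Expected waiting time, in years, for a daily move of at least
    move_pct in either direction, under a Gaussian distribution with
    daily standard deviation sigma_pct."""
    z = move_pct / sigma_pct
    p = erfc(z / sqrt(2))  # two-sided Gaussian tail probability
    return 1.0 / (p * trading_days)

print(f"{recurrence_years(10.88):.2e} years")
```

Under these assumptions a 10.88% day recurs roughly once every 7 × 10^22 years, in line with the quoted "73 trillion billion."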
Now supposedly quants have developed some fixes to various pricing and risk management models to allow for tail risk (can quant readers in the audience please tell us about them in comments: how they work and how successful you believe them to be? I assume it’s GARCH, but confirmation/elaboration/additions welcome).
Taleb casts doubts on these fixes:
Let us start with the inverse problem of rare events and proceed with a simple, nonmathematical argument. In August 2007, The Wall Street Journal published a statement by one financial economist, expressing his surprise that financial markets experienced a string of events that “would happen once in 10,000 years”. A portrait of the gentleman accompanying the article revealed that he was considerably younger than 10,000 years; it is therefore fair to assume that he was not drawing his inference from his own empirical experience (nor from history at large), but from some theoretical model that produces the risk of rare events, or what he perceived to be rare events.
Alas, the rarer the event, the more theory you need (since we don’t observe it). So the rarer the event, the worse its inverse problem. And theories are fragile (just think of Doctor Bernanke).
The tragedy is as follows. Suppose that you are deriving probabilities of future occurrences from the data, assuming (generously) that the past is representative of the future. Now, say that you estimate that an event happens every 1,000 days. You will need a lot more data than 1,000 days to ascertain its frequency, say 3,000 days. Now, what if the event happens once every 5,000 days? The estimation of this probability requires some larger number, 15,000 or more. The smaller the probability, the more observations you need, and the greater the estimation error for a set number of observations. Therefore, to estimate a rare event you need a sample that is larger and larger in inverse proportion to the occurrence of the event.
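Taleb's sample-size argument can be sketched with the standard error of an empirical frequency: for an event of probability p estimated from n observations, the relative standard error is sqrt((1-p)/(np)), so holding accuracy fixed forces n to grow in proportion to 1/p. A minimal illustration, using the n values from the paragraph above:

```python
from math import sqrt

def relative_std_error(p, n):
    """Relative standard error of the empirical frequency of an event
    with true probability p, estimated from n i.i.d. observations."""
    return sqrt((1 - p) / (n * p))

for p, n in [(1 / 1000, 3000), (1 / 5000, 3000), (1 / 5000, 15000)]:
    print(f"p = 1/{round(1 / p)}, n = {n}: error ~ {relative_std_error(p, n):.0%}")
```

A once-in-1,000-days event observed over 3,000 days carries roughly a 58% relative error; a once-in-5,000-days event needs about 15,000 days of data to get back down to that same error.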
If small probability events carry large impacts, and (at the same time) these small probability events are more difficult to compute from past data itself, then: our empirical knowledge about the potential contribution—or role—of rare events (probability × consequence) is inversely proportional to their impact. This is why we should worry in the fourth quadrant!
The issue is that when you do find one of these outliers, and you are working in a region where extreme events are big enough to worry about (like days when the markets are really roiled), you have so few of the super-extreme events that your estimate of the significance of the tails gets distorted (Taleb goes through this in geekier form in his technical appendix).
Taleb gathered every kind of market and macroeconomic data item he could locate (stock prices in various markets, commodities, interest rates, currencies, inflation, etc.) where he could have a reasonably long time series. For the ones where he had 40 years of data, single events would take up most of the estimate of the tail risk. For instance, the 1987 crash is (from memory) 78% of the estimate of the tail risk for the S&P 500. For silver, it was even worse, nearly 90%.
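One way to see how a single outlier can dominate the tail estimate is to compute the share of the sample fourth moment (which drives kurtosis, a standard tail measure) contributed by the largest absolute move. This is a sketch on synthetic data, not Taleb's actual series; the 40-year Gaussian series and the spliced-in -20% crash day are illustrative assumptions:

```python
import random

def max_share_of_fourth_moment(returns):
    """Fraction of the sample fourth moment contributed by the
    single largest absolute daily move."""
    x4 = [r ** 4 for r in returns]
    return max(x4) / sum(x4)

random.seed(0)
# ~40 years of Gaussian daily returns with a 1% standard deviation
gauss = [random.gauss(0.0, 0.01) for _ in range(40 * 252)]
# The same series with one 1987-crash-sized day spliced in
with_crash = gauss + [-0.20]

print(f"Gaussian only:      {max_share_of_fourth_moment(gauss):.0%}")
print(f"with one crash day: {max_share_of_fourth_moment(with_crash):.0%}")
```

In the pure Gaussian series no single day matters much; splice in one crash-sized day and it accounts for the bulk of the fourth moment, which echoes the 78% and 90% figures above.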
The text of the article is here.
Taleb was relaxed and funny at points during his talk, and seemed to enjoy chatting with the audience afterwards. I suspect his prickly streak comes to the fore when dealing with types he calls “charlatans”, particularly when they know enough to know better.
Originally published at Naked Capitalism and reproduced here with the author’s permission.