Taleb Presentation on the Fourth Quadrant

Nassim Nicholas Taleb gave a presentation in New York yesterday which hews closely to a recent piece of his, although his talk did include some additional interesting charts and anecdotes. The article is worthwhile and deserves your attention, but let me highlight the two things I found most interesting. First was his “fourth quadrant” construct. He sets up a 2 by 2 matrix. On one axis are phenomena that are normally distributed versus ones that have fat tails, unknown tails, or unknown characteristics. On the other axis are simple versus complex payoffs from events. Simple payoffs are yes/no (dead or alive, for instance). “How much” payoffs are complex.

Models fail in the quadrant where you have fat or unknown tails and complex payoffs. A lot of phenomena fall there, such as epidemics, environmental problems, general risk management, insurance, and natural catastrophes. And there are phenomena in that quadrant with very complex payoffs, like payoffs from innovation, errors in analysis of deviation, and derivative payoffs.

The other part that caught my attention was the estimation of fat tail risk.

As most readers know, all the fundamental models of finance theory use Gaussian (normal) distributions. Trading markets do not have normal probability distributions. Eurointelligence had a bit of fun with the particularly wild ride of last October:

October 2008 was certainly a spectacular month in the stock markets….

Those of us who studied modern finance theory, however, were truly astonished by the sheer improbability of the events occurring in the stock markets during that fateful month. One of the basic assumptions used in almost all our finance models is that returns are normally distributed. These models are widely used to price derivatives and other complex financial products. What do these models tell us about the probabilities of the events that occurred in October?

The following table gives an answer. We selected the six largest daily percentage changes in the Dow Jones Industrial Average during October, and asked the question of how frequent these changes occur assuming that, as is commonly done in finance models, these events are normally distributed. The results are truly astonishing. There were two daily changes of more than 10% during the month. With a standard deviation of daily changes of 1.032% (computed over the period 1971-2008) movements of such a magnitude can occur only once every 73 to 603 trillion billion years. Since our universe, according to most physicists, exists a mere 20 billion years we, finance theorists, would have had to wait for another trillion universes before one such change could be observed. Yet it happened twice during the same month. A truly miraculous event. The other four changes during the same month of October have a somewhat higher frequency, but surely we did not expect these to happen in our lifetimes.
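For readers who want to check the arithmetic, here is a back-of-the-envelope version of the calculation. The standard deviation comes from the quote above; the two-sided convention and the 252-trading-day year are my assumptions:

    # Back-of-the-envelope check of the waiting-time claim, assuming the
    # 1.032% daily standard deviation quoted above and a 252-trading-day
    # year. The quote's 73-603 trillion billion year range corresponds to
    # the actual moves, which were somewhat larger than the 10% used here.
    from scipy.stats import norm

    sigma = 0.01032              # daily std dev of DJIA changes, 1971-2008
    move = 0.10                  # a 10% daily change
    z = move / sigma             # about 9.7 standard deviations

    p = 2 * norm.sf(z)           # two-sided tail probability under a normal
    wait_days = 1 / p            # expected days between such moves
    wait_years = wait_days / 252

    print(f"z = {z:.1f} sigmas, p = {p:.1e}, wait ~ {wait_years:.1e} years")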

Now supposedly quants have developed some fixes to various pricing and risk management models to allow for tail risk (can quant readers in the audience please tell us about them in comments, as in how they work and how successful you believe them to be? I assume it’s GARCH, but confirmation/elaboration/additions welcome).
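For concreteness, here is roughly what the GARCH approach looks like in practice. This is a minimal sketch using the Python arch package on placeholder data, emphatically not anyone’s production model:

    # Minimal GARCH(1,1) sketch using the `arch` package (pip install arch).
    # GARCH lets today's variance depend on yesterday's squared shock and
    # yesterday's variance, so volatility clusters and extreme moves become
    # far less "impossible" than under a constant-sigma Gaussian.
    import numpy as np
    from arch import arch_model

    rng = np.random.default_rng(0)
    returns = rng.standard_t(df=4, size=2000)  # placeholder for daily % returns

    model = arch_model(returns, vol="GARCH", p=1, q=1, dist="t")
    result = model.fit(disp="off")
    print(result.params)                  # mu, omega, alpha[1], beta[1], nu
    print(result.forecast(horizon=1).variance.iloc[-1])  # next-day variance

The catch, as the next quote makes clear, is that even a fat-tailed GARCH model has to estimate its tail parameters from the same limited history.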

Taleb casts doubts on these fixes:

Let us start with the inverse problem of rare events and proceed with a simple, nonmathematical argument. In August 2007, The Wall Street Journal published a statement by one financial economist, expressing his surprise that financial markets experienced a string of events that “would happen once in 10,000 years”. A portrait of the gentleman accompanying the article revealed that he was considerably younger than 10,000 years; it is therefore fair to assume that he was not drawing his inference from his own empirical experience (and not from history at large), but from some theoretical model that produces the risk of rare events, or what he perceived to be rare events.

Alas, the rarer the event, the more theory you need (since we don’t observe it). So the rarer the event, the worse its inverse problem. And theories are fragile (just think of Doctor Bernanke).

The tragedy is as follows. Suppose that you are deriving probabilities of future occurrences from the data, assuming (generously) that the past is representative of the future. Now, say that you estimate that an event happens every 1,000 days. You will need a lot more data than 1,000 days to ascertain its frequency, say 3,000 days. Now, what if the event happens once every 5,000 days? The estimation of this probability requires some larger number, 15,000 or more. The smaller the probability, the more observations you need, and the greater the estimation error for a set number of observations. Therefore, to estimate a rare event you need a sample that is larger and larger in inverse proportion to the occurrence of the event.

If small probability events carry large impacts, and (at the same time) these small probability events are more difficult to compute from past data itself, then: our empirical knowledge about the potential contribution—or role—of rare events (probability × consequence) is inversely proportional to their impact. This is why we should worry in the fourth quadrant!
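Taleb’s scaling argument is easy to check numerically: hold the sample size fixed, shrink the event probability, and the relative error of the estimated frequency grows roughly like 1/sqrt(n*p). A quick sketch, with parameters chosen purely for illustration:

    # Sketch: how the error in estimating a rare event's frequency grows as
    # the event gets rarer. For a Bernoulli event with probability p seen in
    # n observations, phat = k/n has std dev sqrt(p(1-p)/n), so the relative
    # error scales like 1/sqrt(n*p): an event ten times rarer needs ten
    # times the data just to hold the relative error constant.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 10_000  # fixed sample size, roughly 40 years of daily observations

    for p in (1e-2, 1e-3, 1e-4):
        counts = rng.binomial(n, p, size=50_000)  # 50,000 simulated histories
        phat = counts / n                         # estimated frequency per history
        print(f"p = {p:.0e}: relative error ~ {phat.std() / p:.2f}")

In other words, the roughly 40 years of daily data Taleb examines below is nowhere near enough to pin down a once-in-5,000-day event, let alone a once-in-a-generation one.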

The issue is that when you do find one of these outliers, and you are working in a region where those extreme events are big enough to worry about, like days when the markets are really roiled, you have so few of the super extreme events that a single observation can distort your estimate of the significance of the tails (Taleb goes through this in geekier form in his technical appendix).

Taleb gathered every kind of market and macroeconomic data item he could locate (stock prices in various markets, commodities, interest rates, currencies, inflation, etc.) where he could have a reasonably long time series. For the ones where he had 40 years of data, single events would take up most of the estimate of the tail risk. For instance, the 1987 crash is (from memory) 78% of the estimate of the tail risk for the S&P 500. For silver, it was even worse, nearly 90%:

[Chart: share of the tail-risk estimate attributable to the single largest event, for each market]
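If I read the technical appendix right, the statistic behind that chart is the share of the sample’s fourth moment (which drives the kurtosis, and hence the tail estimate) contributed by the single largest daily move. Here is a rough sketch of the computation on synthetic fat-tailed data; with roughly 40 years of real S&P 500 returns, one day in October 1987 supplies most of the total:

    # Sketch: share of the fourth moment (the driver of kurtosis, hence of
    # tail estimates) contributed by the single largest observation. Run on
    # synthetic Student-t data here as a stand-in for real returns.
    import numpy as np

    def max_fourth_moment_share(returns):
        """Fraction of sum(x**4) owed to the single most extreme move."""
        x4 = np.asarray(returns) ** 4
        return x4.max() / x4.sum()

    rng = np.random.default_rng(7)
    sample = rng.standard_t(df=3, size=10_000)  # ~40 years of daily data
    print(f"largest single day's share: {max_fourth_moment_share(sample):.0%}")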

The text of the article is here.

Taleb was relaxed and funny at points during his talk, and seemed to enjoy chatting with the audience afterwards. I suspect his prickly streak comes to the fore when dealing with types he calls “charlatans”, particularly when they know enough to know better.


Originally published at Naked Capitalism and reproduced here with the author’s permission.

7 Responses to "Taleb Presentation on the Fourth Quadrant"

  1. Guest   April 24, 2009 at 9:28 am

    I believe Mr Taleb is at minute 14. One to go.

  2. JRT   April 24, 2009 at 3:51 pm

    “Statistics… that’s what you use when you don’t know what you’re talking about.” Statistics are tools to describe large numbers of things. If our minds can completely comprehend all the individual things without statistics, it is better to deal directly and not use these imprecise descriptions/abstractions. By definition, statistical mathematics are designed to adapt (Nassim’s higher moments) to describe anything by varying the curve or “goodness” of fit. When a large population described with statistics doesn’t contain the observation the statistics claim, you’re on mentally shaky ground. E.g. market prices.

    Electrical engineers use statistical tools to design circuits and software. If I used the arguments the economists did, my circuits wouldn’t work and I’d get fired. Economics only has to “sort of” work (perfect for lognormal statistics).

    In the 50’s, economists tried to catch up with the golden age of physics by enlisting U of Chicago et al. to attempt to “quantify” the social science. Lognormal distributions were useful. Remember they had less compute horsepower and data than I have on the machine I’m typing this on. I’ve replicated Markowitz et al.’s observations using much more data and longer timeframes, and it shows the same thing: they’re “useful” BUT A LONG WAY FROM THE GOODNESS OF FIT the central limit theorem provides in the physical world. It won’t get me a Nobel or a top paper published in 2009; it would have in the 50’s.

    Professors in the know, know this. They don’t know how to extract themselves from the decades of teaching unscathed and without liabilities. Better to divert to behavioral finance and wait for a generation to die/fade away seems to be their plan.

    • FactsDontMatter   April 25, 2009 at 12:10 pm

      @JRT: I’m in agreement with you and not arguing, but I’m curious what you’re referring to in “statistical tools for designing circuits and software.” I’m an (older) electrical engineer myself, and I would refer to the tools that I’ve seen as mathematical models, but not necessarily statistical. These may be older methods, though, so I’m asking because I want to know. Genetic algorithms, maybe?

      • JRT   April 25, 2009 at 11:10 pm

        FactsDontMatter, I designed CMOS integrated circuits. As physical transistor dimensions have shrunk, designs needed to be less dependent on transistor matching (and redundant in the case of dense memories) to yield an economical product. Fortunately, lots of transistors (literally hundreds of millions) were increasingly cheap (virtually free) as the individual parameter tolerances became progressively crappier. They can match to less than 50% across a 10mm die.

  3. Guest   April 25, 2009 at 4:37 pm

    It’s the curse of set theory. If you want to read what baloney it really is, read A. Garciadiego, BERTRAND RUSSELL AND THE ORIGINS OF THE SET THEORETIC ‘PARADOXES.’ But watch out! Like Taleb, your head is probably stuffed with this same baloney!

    John Ryskamp
    philneo2001@yahoo.com

  4. Joao Garcia   April 26, 2009 at 4:24 am

    I studied fractals in 1990 during my M.Sc. in Physics (Game of Life and crumpled wires). I began doing credit derivatives in 2000. In Sept 2002 I priced distressed CDOs in the portfolio of the monoline FSA using Gaussian copulas, and I detected the problems of correlation in the asset class. Then, from Sept 2002 to Jun 2006, I worked on putting in place a system to do dynamic credit portfolio management, exactly to manage this sort of portfolio. I then moved to the front office to build the strategies to profit from what was to come. I cannot say the rest.

    The presentation I made in London in Dec 2006 is proof of what I am talking about. See: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1342206 (slides 14, 21 and 22). The proposal to deal with securitization is in an article: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1274774. We began it in Feb 2007. A forthcoming book (The Art of Credit Derivatives) has a more detailed description of everything (see: http://www.amazon.com/Art-Credit-Derivatives-Demystifying-Black/dp/0470747358/ref=sr_1_1?ie=UTF8&s=books&qid=1238884470&sr=1-1).

    Thanks, best regards,
    Joao Garcia (www.sergeandjoao.com)

  5. Guest   May 4, 2009 at 12:50 am

    Mr. Joao Garcia, thank you for the website http://www.sergeandjoao.com , very informative. On the first link, I like slide 20 under securitization in the PowerPoint.

    In response to the article: the October Black Swan could be more related to Extremistan, although, as noted, statistics has two faults: first, it is based on historical data, and second, it can be skewed to support an opinion.

    Volatility was extremely high in January of 2007 (I remember some economists squeaking about it), signaling quasi risk appetite. The first Black Swan in October 2008 wasn’t a fear sell-off, which would have been worse; rather, no one bought in that day. The second Black Swan was a sell-off. Thus there were two in one month.