Bad Statistics, Bad Risk Management and Government Bailouts

As I was reading a Kevin Dowd column from the January 2007 issue of Financial Engineering News yesterday, I was reminded of the current crisis and much of the present GSE bailout debate. Overall, the contrast revealed that risk managers have, by and large, laid down subjective decision-making tools and strategies (i.e., prudence) and picked up ever more complex modeling strategies to justify their choices. Those more complex models estimate higher moments of performance (i.e., standard deviations and skewness rather than just means), and markets are developing information products to help.

Nonetheless, policymaking remains rooted in lower statistical moments (if any at all) and in little that can be called prudence. Hence, when I say that prudence seems to be a thing of the past, I mean that both policymakers and risk managers today are hard-pressed to sell a strategy that decreases risk – that is, increases capital – without some type of seemingly solid evidence. As Dowd wrote, “Prudence is expensive on capital, which does not go down well in the boardroom. This puts risk managers in an awkward situation, but the real issue is whether senior management want to manage risks in the first place. Many firms are not serious about risk management. In such firms, a recommendation to increase capital challenges corporate policy, and a risk manager brave enough to make such a recommendation may risk his job.”

So what do risk managers do? They rely on statistical analysis to give evidence for their recommendations. But good risk managers know that simple statistical analysis does not predict the vast majority of scenarios that can pose significant losses to the firm – precisely those scenarios the risk manager is responsible for monitoring.

Dowd uses the case of simple portfolio correlations. MBA finance teaches that those correlations can be used to measure diversification and are therefore the root of loss expectations in value-at-risk (VaR) models. Accepted practice is therefore to generate some average expected correlations and then proceed with the VaR estimation. The problem is that “…we cannot assume that correlations move smoothly. One of the best established stylized facts about correlation estimates is that [correlations] are apt to polarize in a crisis and jump toward their most extreme possible values (i.e., plus or minus one).” When correlations jump to plus or minus one, portfolio diversification goes out the window and the entire portfolio is at risk.
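The arithmetic behind this point is easy to see in a two-asset portfolio. The sketch below (with made-up weights and volatilities, chosen only for illustration) computes portfolio volatility from the standard variance formula; when the correlation jumps to one, portfolio volatility equals the assets' own volatility and the diversification benefit disappears entirely.

```python
import math

def portfolio_vol(w1, w2, s1, s2, rho):
    """Standard deviation of a two-asset portfolio with weights w1, w2,
    asset volatilities s1, s2, and correlation rho."""
    variance = (w1 * s1) ** 2 + (w2 * s2) ** 2 + 2 * w1 * w2 * s1 * s2 * rho
    return math.sqrt(variance)

# Equal-weight portfolio of two assets, each with 20% volatility.
# As rho climbs toward +1, portfolio volatility climbs back to 20%:
# the diversification benefit vanishes.
for rho in (0.3, 0.7, 1.0):
    print(f"rho = {rho:4.1f}  portfolio vol = {portfolio_vol(0.5, 0.5, 0.2, 0.2, rho):.4f}")
```

At rho = 1.0 the loop prints a portfolio volatility of 0.2000, identical to holding a single asset, which is exactly the “entire portfolio at risk” situation Dowd describes.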

Hence, quantitative types (rightly or wrongly) move beyond simple mean correlations and begin to estimate the standard deviation and skewness of correlation distributions in order to predict the probability of correlations jumping to negative or positive one. Those estimates feed attempts to predict “regime shifts” or to perform “multivariate extreme-value analysis,” which suggests picking a copula function from the family of extreme-value copulas. If all this sounds familiar, well, it is. That’s what the ratings agencies were doing when they put all their eggs in one modeling basket and – as a result – produced statistically unreliable ratings.

So what is a risk manager or policymaker to do? You can’t go back to the simple world of average correlations, but you can’t just assume total loss. On the other hand, you shouldn’t be fooled by statistics, either. The typical (accepted) approach is to estimate VaR on the basis of the normal average correlations, but then also look at what would happen in the crisis scenario by evaluating the extreme loss dynamics, because that extreme loss is what you will most likely end up paying out.
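This two-sided approach can be sketched in a few lines. All the numbers below (weights, volatilities, correlations, the 99% normal quantile) are hypothetical, chosen only to show the mechanics: compute parametric VaR once with ordinary correlations, then again with every correlation stressed to +1, and report both.

```python
import math

Z_99 = 2.326  # one-tailed 99% standard-normal quantile (illustrative)

def parametric_var(value, weights, vols, corr):
    """99% parametric VaR of a portfolio, assuming normally distributed
    returns. All inputs are hypothetical illustration values."""
    n = len(weights)
    variance = sum(weights[i] * weights[j] * vols[i] * vols[j] * corr[i][j]
                   for i in range(n) for j in range(n))
    return value * Z_99 * math.sqrt(variance)

weights = [0.4, 0.3, 0.3]
vols = [0.15, 0.25, 0.20]
normal_corr = [[1.0, 0.2, 0.1],
               [0.2, 1.0, 0.3],
               [0.1, 0.3, 1.0]]
# Crisis scenario: correlations polarize to +1, diversification vanishes.
crisis_corr = [[1.0] * 3 for _ in range(3)]

normal_var = parametric_var(1_000_000, weights, vols, normal_corr)
crisis_var = parametric_var(1_000_000, weights, vols, crisis_corr)
print(f"VaR, normal correlations:  ${normal_var:,.0f}")
print(f"VaR, crisis correlations:  ${crisis_var:,.0f}")
```

With these particular inputs the stressed figure comes out roughly 45% higher than the normal one; the gap between the two numbers, not either number alone, is the useful output.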

Let’s apply this method to the GSE bailout. With normal correlations, the CBO estimates the bailout will cost the Treasury (i.e., taxpayers) zero, because no bailout is given. If a crisis occurs (or continues), the CBO estimates the program could cost well over $100 billion.

The $25 billion published in the press is, in statistical terms, an expected value. Expected values are obtained by multiplying the probability of a scenario by the payout in that scenario. For instance, in a coin flip where you get $1 if it is heads and pay $1 if it is tails, the expected value is ½ × $1 + ½ × (−$1) = $0. But you never actually get a $0 outcome: you either get $1 or you pay $1.
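The same arithmetic can be written out directly. The 25% crisis probability below is an assumption of mine, chosen purely because it is one way a $25 billion expected value could arise from a $100 billion crisis scenario; the CBO's actual probability weighting is not stated in the source.

```python
# Coin flip: win $1 on heads, lose $1 on tails.
# The expected value is zero, but no individual flip ever pays zero.
coin_ev = 0.5 * 1 + 0.5 * (-1)
print(coin_ev)  # 0.0

# The same fallacy applied to the bailout figures.
p_crisis = 0.25          # hypothetical probability, for illustration only
crisis_cost = 100e9      # the "well over $100 billion" crisis scenario
calm_cost = 0.0          # no crisis, no bailout, zero cost
expected_cost = p_crisis * crisis_cost + (1 - p_crisis) * calm_cost
print(f"${expected_cost:,.0f}")  # $25,000,000,000
# But the realized outcome is only ever $0 or $100B+; $25B never occurs.
```

The headline number is an average over outcomes that cannot individually happen, which is exactly the fallacy the next paragraph describes.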

That is the statistical fallacy of the $25 billion number: the payout will either be well in excess of $100 billion or zero. The $25 billion expected value does not actually occur. So when Jim DeMint was quoted in today’s Wall Street Journal as calling the plan a “huge gamble,” adding, “It’s kind of: guarantee a little now or pay a whole lot later,” he was playing into the statistical fallacy. Add to the expected payout the effects of crony-capitalism-inflated moral hazard, and the payout will probably be much larger than any CBO estimate considered. Think of this plan as just forbearance in the biggest thrift-crisis-style failure ever, and remember what happened then, as thrifts dug themselves deeper into insolvency by making ever riskier investments in their bids for resurrection.

In the words of Kevin Dowd, “Good statistics does not always lead to good risk management, and good risk management can be based on very bad statistics.”