The US subprime crisis: Was it really the Fed’s fault?

In the discussion about the origins of the US subprime crisis, the overwhelming number of explanations contain one predominant element: According to most economists (whether academic or from the financial sector), the crisis is at least partly the US Fed’s fault. Since the US central bank kept interest rates too low for too long, the argument goes, banks were forced to invest in riskier assets in order to earn a return at least a little above the meagre return on Treasuries. As a consequence, the banks extended loans to people who in fact were not able to pay back their mortgages. Had the Fed increased interest rates earlier, returns on Treasury bonds would have increased and the banks would not have been forced into risky lending.

While this reasoning sounds quite plausible at first, a closer inspection reveals some serious shortcomings. First, from a theoretical point of view, it is hard to construct a mechanism by which low interest rates translate into riskier lending. In the theory of finance, we usually talk about the problems of moral hazard and adverse selection, and about a shift toward riskier lending if interest rates rise. The logic behind this is straightforward: If interest rates rise, an increasing number of solid investment projects can no longer meet the return requirements. Risk-averse borrowers drop out of the loan market, while risk-seekers stay behind, gambling on shifting part of their potential losses onto the lender.

The claim that low interest rates have forced the banks into riskier business, in contrast, is very hard to explain microeconomically, especially as it has not only been the returns on assets that were low, but also the interest rates in the money and capital markets at which the banks could borrow themselves.

The problem with the predominant explanation of the causes of the subprime mortgage crisis becomes even more evident if one compares this crisis with the explanations for the second-to-last boom and bust in American real estate, the savings and loan (S&L) crisis of the 1980s. The standard textbook explanation is that during this period, very high interest rates induced banks to move into riskier lending (e.g. Mishkin 2007): In the early 1980s, the S&Ls had come under pressure because their refinancing costs increased due to the Fed’s interest rate hikes and the deregulation of deposit interest rates in the US, while the return on their long-maturity mortgages remained basically unchanged. According to Mishkin, these higher refinancing costs, coupled with a recession that hurt many otherwise profitable investment projects, “increased dramatically […] the incentives of moral hazard”. Freed from restrictions by the deregulation of the time, S&Ls started to make riskier loans in the hope of earning higher returns. A house price and land speculation boom followed before many of the mortgages turned non-performing. In the end, a large number of S&Ls had to be closed down, houses were foreclosed and the government had to pick up a bill of about $150 bn.

Given that an overly tight monetary policy is supposed to have been one of the causes for the S&Ls’ move into riskier business, it is, from a logical point of view, very difficult to argue consistently that this time an overly loose monetary policy had the same effect on banks.

But what about the housing bubble, some readers might ask? Isn’t it true that a loose monetary policy causes bubbles, and hence that the Fed’s policy must be at the root of the US housing bubble? Again, while this is a very popular claim, empirical evidence for it is scant, to say the least. The housing boom and bust of the 1980s, as well as the stock market boom and the subsequent bust in 1987, can hardly be traced to an overly loose monetary policy: If anything, monetary policy was very tight in the 1980s, with several industrialized countries running monetarist experiments to battle inflation.

Even for the arguably largest and most damaging boom and bust in US history, the stock market boom of the 1920s and the crash of 1929, this claim is difficult to sustain. True, the Fed lowered the interest rate from 4.0 to 3.5 percent in 1927. However, at the time, prices were falling at an annual rate of about 2 percent. Thus, real interest rates remained above 5 percent – not something you would usually call a loose monetary policy. Nevertheless, the stock market bubble inflated and finally burst, with the well-known consequences of the Great Depression. As John Kenneth Galbraith puts it in his analysis of the Great Depression (1954, p. 187):

“We do not know why a great speculative orgy occurred in 1928 and 1929. The long-accepted explanation that credit was easy and so people were impelled to borrow money to buy common stock on margin is obviously nonsense. […M]uch of the 1928 and 1929 speculation occurred on money borrowed at interest rates which for years before, and in any period since, would have been considered exceptionally astringent. Money, by the ordinary tests, was tight in the late twenties.”
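The real-rate arithmetic above follows the textbook Fisher relation; as a sketch, using the approximate 1927 figures cited in the text:

```latex
% Fisher relation: real rate is roughly the nominal rate minus inflation
r \approx i - \pi
% 1927: nominal rate i = 3.5\%, deflation of about 2\% (\pi \approx -2\%)
r \approx 3.5\% - (-2\%) = 5.5\%
```

So despite the nominal rate cut, the real cost of borrowing stayed above 5 percent.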

Finally, one can ask whether the Fed should not have tried to burst the latest housing market bubble earlier. Uli Fritsche has argued that a bubble can only be detected with hindsight. In the case of the US housing market, I would not support that claim: It had been clear for a long time that price increases were excessive.

However, it is another question whether the Fed could have done much about it. Once a speculative boom has gained traction and asset prices are rising at rates strongly in excess of nominal GDP growth, marginally higher interest rates might not do much to discourage speculation. If you are convinced that your house will gain 10 percent in value over the coming year, it might not make much difference to you whether your mortgage rate is 5 or 7 percent, especially if banks offer you a negative-amortization loan. In contrast, ordinary investment in corporate fixed assets might well react to an increase in the borrowing rate from 5 to 7 percent. Hence, an interest rate increase large enough to deflate the bubble would most likely have pushed the economy into a recession. It is very hard to argue that the Fed – or any central bank – has the mandate to accept that much collateral damage for popping a bubble.
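The asymmetry can be put as a back-of-the-envelope comparison. The housing figures are the ones in the text; the 6 percent corporate yield is purely an illustrative assumption:

```latex
% Housing speculator: expected appreciation minus mortgage rate
10\% - 5\% = 5\% > 0 \qquad 10\% - 7\% = 3\% > 0
% Corporate fixed investment with an assumed yield of 6\%:
6\% - 5\% = 1\% > 0 \qquad 6\% - 7\% = -1\% < 0
```

The speculator’s expected margin stays comfortably positive either way, while the ordinary investment flips from profitable to unprofitable – which is why a rate hike hits the real economy before it hits the bubble.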

Galbraith, J. K. (1954), The Great Crash 1929, Penguin Books.

Mishkin, F. (2007), The Economics of Money, Banking, and Financial Markets, Pearson.

This article has been co-posted at Eurozone Watch.

10 Responses to "The US subprime crisis: Was it really the Fed’s fault?"

  1. Anonymous   December 9, 2007 at 12:13 pm

    The blog ignores the Fed’s responsibility in supervising mortgage lending practices, another shortcoming they have widely been accused of.

  2. Guest   December 9, 2007 at 2:16 pm

Yes, the Fed was responsible. It lowered interest rates from 6% to 1% when consumer price inflation was heading toward 1.5%. Thus, money creation was going full tilt. Also, Greenspan encouraged people to go into ARMs in 2004. A lot of economists knew there was a bubble as far back as 2003. See The Economist’s May 2003 cover story about it. Also, Dr. Dean Baker wrote a paper on it in 2003 and spoke against it. He was laughed at and scorned. Shiller also published a book about it.

  3. eparisi   December 9, 2007 at 2:46 pm

One solution encountered in the literature on how regulators and monetary authorities can deal with asset inflation without unwelcome spillovers to the economy is asset-based reserve requirements. This counter-cyclical approach requires all financial institutions (i.e. not only banks) to hold reserves based on their asset classes, as opposed to the present pro-cyclical risk-based capital requirements, which become binding when asset quality deteriorates and can thus exacerbate a credit crunch. I’m not sure about the feasibility or the drawbacks, but it sounds like an interesting idea.

  4. Octavio Richetta   December 9, 2007 at 5:08 pm

Great post. Your views on the role of interest rates in the formation of asset bubbles were a significant contribution to my knowledge. However, when you say that the Fed could not have prevented the latest housing bubble via higher interest rates, you seem to be assuming that this was the only tool at its disposal. IMO, this is not the case. The Fed could have done more in terms of regulation/oversight (I am sure you are aware of all the reckless lending practices that went on and are still going on, and the Fed did nothing about them). I am not a strong advocate of market regulation, but if it is us, the people, who must pick up the tab when the capitalists screw up, then it would seem logical that we have some input on crafting the rules of the game.

  5. Nicolas   December 10, 2007 at 4:35 am

    Low rates caused the speculative bubbles and the FED closed their eyes as long as certain industries were making money meanwhile many jobs were being transferred abroad. The communities could derive more taxes from speculative valuations. All said, a recipe for disaster that now has unfolded.

  6. Anonymous   December 10, 2007 at 5:08 am

Very good post. The truth is that the housing bubble gathered force when interest rates began to rise. Once a bubble has developed, raising interest rates (as the Bank of Japan did in 1990) risks provoking a recession and an acute fall in asset prices. Once this dynamic is running, the odds are in favor of a true deflation, as the case of Japan shows. Studying the difficulties of Japan over the past 17 years is the first thing we have to do.

  7. John Craig   December 10, 2007 at 3:37 pm

    Might I suggest that global financial imbalances were a part of the problem – because the result of this was huge capital inflows to the US for which productive uses had to be found. Those imbalances reflect the re-cycling of East Asia’s current account surpluses and the Yen carry trade. More comments on the relationship between fiscal imbalances and the credit crunch are in

  8. Guest   December 10, 2007 at 6:19 pm

Moral Hazard arises if insurance against the risk of a bad outcome encourages more risky behavior. PIMCO’s Paul McCulley argues three central bank policy pre-commitments removed three major risks from the global markets over the past few years: 1) Starting in 1995, the PBOC pre-committed to absorbing dollar depreciation risk via a pegged exchange rate regime for the Renminbi. 2) Starting in February 2001, the BoJ pre-committed to absorbing Japanese short-term interest rate risk via its Zero Interest Rate Policy (ZIRP), reinforced by its Quantitative Easing (QE) policy. 3) Starting in August 2003, the Fed pre-committed to holding short rates accommodative for a “considerable period and after that to hiking only at a measured pace”. The result: a concerted global reflationary monetary policy that induced private-sector agents into more risk-seeking behaviour, i.e. moral hazard. For a further discussion see:

  9. BJ   December 11, 2007 at 11:22 am

    Very nice post. From a micro perspective I don’t think it matters if interest rates are rising or falling when it comes to incentives and moral hazard. As you explained, in the 80s traditional ‘safe’ lending turned unprofitable because of Reg Q which forced banks to turn to higher-yielding activities to compensate for the returns that depositors and shareholders could get somewhere else. In today’s environment (or until a few months ago, anyway) traditional lending is also unprofitable because the interest margins are too low to compensate for shareholders’ return on equity. This incentivizes banks into higher-yielding trading and fee generation activities. Banks could finance their activities by piling up additional cheap debt instead of equity but that would itself lead to higher leverage and risk.

  10. Guest   December 16, 2007 at 9:25 am

No, it was the fault of people like you.