Q4:2012 U.S. GDP Nowcast Update

Will the rebuilding in the Northeast in the wake of Hurricane Sandy’s destruction juice the economy’s modest growth trend? It’s too early to know, but the possibility can’t be dismissed. For now, it’s time to establish a baseline of nowcasts for Q4:2012 GDP. The official estimate is scheduled for release by the Bureau of Economic Analysis in late February, and so there’s a long road ahead in terms of economic updates. The journey starts here: the current average of our five econometric nowcasts anticipates real annualized Q4 growth of 1.7%, down slightly from Q3’s official 2.0% estimate.

It’s still early, of course, and so there’s limited data for developing a robust estimate for Q4. But as the numbers roll in, we’ll update the nowcasts and monitor the changes. The key issue to watch is how the nowcasts evolve—are they rising, falling, or holding steady? The trend changes will tell us quite a lot about what to expect for the government’s initial estimate of Q4 GDP that’s due to hit the streets next February.

For now, here’s where we stand in terms of The Capital Spectator’s econometric nowcasts, along with two estimates from other sources for perspective:

As new data is published, our nowcasts will adjust accordingly. We’ll track the changes through time for a richer read on how the economic outlook is faring; the changes will be recorded on this chart:

Here’s a brief profile of how each of The Capital Spectator’s nowcasts is calculated:

4-Factor Nowcast. This estimate is based on a multiple regression of historical quarterly GDP against quarterly changes in four key economic indicators: real personal consumption expenditures, real personal income less government transfers, industrial production, and private non-farm payrolls. The model compares the data on a quarterly basis, looking for relationships between GDP and the four series within each quarter from the early 1970s to the present. The four independent variables are updated monthly, and so the nowcast is revised as new data is published. In effect, this model tells us what the data trends in the current quarter imply for the quarter’s GDP growth.
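The mechanics of this kind of regression nowcast can be sketched in a few lines. The snippet below is an illustrative Python/NumPy version fit on synthetic data; the indicator readings and coefficients are hypothetical, not the post’s actual series or estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic quarterly history standing in for the four indicators' growth
# rates (hypothetical data; the actual model uses PCE, real income less
# transfers, industrial production, and payrolls from the early 1970s on).
n = 160
X = rng.normal(0.5, 1.0, size=(n, 4))
beta_true = np.array([0.6, 0.3, 0.4, 0.5])
gdp = 1.0 + X @ beta_true + rng.normal(0.0, 0.5, size=n)

# Multiple regression of GDP growth on the four indicators (ordinary
# least squares via the normal-equations solver).
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, gdp, rcond=None)

# Nowcast: plug the current quarter's indicator readings into the fit.
current = np.array([1.0, 0.4, 0.2, 0.3, 0.1])  # intercept + four indicators
nowcast = float(current @ coef)
```

As monthly updates for the four inputs arrive, the `current` vector changes and the nowcast is re-run, which is why the estimate drifts through the quarter.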

10-Factor Nowcast. This model also uses a multiple regression framework based on historical data from the early 1970s onward and updates the estimates as new numbers arrive. The methodology here is identical to the 4-factor model above except that it uses additional factors—10 in all. In addition to the data quartet in the 4-factor model, the 10-factor nowcast also incorporates the following series:

• ISM Manufacturing PMI Composite Index
• housing starts
• initial jobless claims
• the stock market (S&P 500)
• crude oil prices (spot price for West Texas Intermediate)
• the Treasury yield curve spread (10-year Note less 3-month T-bill)
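One small wrinkle in the factor list worth making explicit: the yield-curve item enters as a single spread, not as two separate rates. A minimal sketch of assembling the 10-column design-matrix row, with hypothetical readings throughout:

```python
import numpy as np

# Hypothetical latest readings (illustrative values only).
ten_year_yield = 1.65    # 10-year Treasury Note yield, %
three_month_bill = 0.10  # 3-month T-bill yield, %

# The yield-curve factor is the spread between the two rates.
curve_spread = ten_year_yield - three_month_bill

extra_factors = np.array([
    51.7,          # ISM Manufacturing PMI Composite Index
    0.036,         # housing starts, monthly % change
    -0.012,        # initial jobless claims, monthly % change
    0.020,         # S&P 500, monthly % change
    -0.050,        # WTI spot crude, monthly % change
    curve_spread,  # 10-year Note less 3-month T-bill
])

# Stacked with the 4-factor quartet, this yields one 10-column row.
four_factors = np.array([0.4, 0.2, 0.3, 0.1])
row = np.concatenate([four_factors, extra_factors])
```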

ARIMA Nowcast. The econometric engine for this nowcast is an autoregressive integrated moving average model. The technique uses only real GDP’s history, dating from the early 1970s onward, to anticipate the current quarter’s change in the broad economy. As the most recent quarterly GDP number is revised, so too is the ARIMA nowcast. The estimate is calculated in R via Professor Rob Hyndman’s “forecast” package, which optimizes the prediction model based on the data set’s historical record.
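The post’s version is built in R, where the “forecast” package picks the model order automatically. As a rough illustration of the underlying idea, here is a hand-rolled autoregressive fit on synthetic growth data in Python; the order is fixed at AR(1) here purely for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic quarterly GDP growth with AR(1) persistence (hypothetical
# data; phi and mu are made-up parameters, not estimates from real GDP).
n = 200
phi, mu = 0.4, 0.5
g = np.empty(n)
g[0] = mu
for t in range(1, n):
    g[t] = mu * (1 - phi) + phi * g[t - 1] + rng.normal(0.0, 0.3)

# Fit the AR(1) by least squares: g_t = c + phi * g_{t-1} + e_t.
A = np.column_stack([np.ones(n - 1), g[:-1]])
c_hat, phi_hat = np.linalg.lstsq(A, g[1:], rcond=None)[0]

# One-step-ahead nowcast from the last observed quarter.
nowcast = float(c_hat + phi_hat * g[-1])
```

Because the only input is GDP itself, this nowcast moves only when the latest quarterly GDP figure is revised, exactly as the text describes.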

ARIMA 4 Nowcast. This model is similar to the ARIMA technique above in terms of the econometric details, but with one key difference. Instead of using GDP’s historical record as the lone input, the ARIMA 4 model analyzes four historical data sets to estimate GDP: real personal consumption expenditures, real personal income less government transfers, industrial production, and private non-farm payrolls.
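The post doesn’t spell out how the four inputs are combined into a GDP estimate, so the sketch below is one plausible reading rather than the actual specification: each input series is extended one quarter ahead by its own simple time-series model, and the extensions are mapped to GDP with illustrative regression weights.

```python
import numpy as np

rng = np.random.default_rng(2)

def ar1_forecast(x):
    """Least-squares AR(1) fit and one-step-ahead forecast for one series."""
    A = np.column_stack([np.ones(len(x) - 1), x[:-1]])
    c, phi = np.linalg.lstsq(A, x[1:], rcond=None)[0]
    return c + phi * x[-1]

# Four hypothetical indicator histories (standing in for consumption,
# income less transfers, industrial production, and payrolls).
series = rng.normal(0.5, 1.0, size=(4, 120))

# Extend each series one quarter ahead with its own time-series model.
forecasts = np.array([ar1_forecast(s) for s in series])

# Map the extended inputs to a GDP estimate (weights are illustrative,
# not the post's estimated coefficients).
weights = np.array([0.6, 0.3, 0.4, 0.5])
nowcast = float(1.0 + forecasts @ weights)
```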

VAR Nowcast. The vector autoregression model also analyzes four economic series in search of interdependent relationships for estimating GDP. The four data sets in the 4-factor and ARIMA 4 models above are also used to generate the VAR-based GDP nowcasts. As new data for each of the four series is published, the VAR nowcast is revised as well. The basic idea here is to let the data specify the model’s parameters. The data sets draw on historical records from the early 1970s onward, with the “vars” package for R crunching the numbers.
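The post runs this in R’s “vars” package; a minimal NumPy sketch of the same idea, a VAR(1) fit and one-step forecast on synthetic data, might look like this (the GDP mapping weights at the end are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic history for the four series (hypothetical data; each series
# feeds back on the others' lags in a real VAR, here kept simple).
n, k = 160, 4
Y = np.zeros((n, k))
for t in range(1, n):
    Y[t] = 0.3 * Y[t - 1] + rng.normal(0.0, 0.5, size=k)

# Fit a VAR(1) equation by equation: Y_t = c + B' Y_{t-1} + e_t.
X = np.column_stack([np.ones(n - 1), Y[:-1]])
B, *_ = np.linalg.lstsq(X, Y[1:], rcond=None)

# One-step-ahead forecast of all four series from the last observation.
y_next = np.concatenate([[1.0], Y[-1]]) @ B

# Map the joint forecast to a GDP estimate (weights are illustrative).
weights = np.array([0.6, 0.3, 0.4, 0.5])
gdp_nowcast = float(1.0 + y_next @ weights)
```

The distinguishing feature versus the 4-factor regression is that the four series forecast one another jointly before anything is mapped to GDP, which is what “letting the data specify the model’s parameters” amounts to in practice.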

This post was originally published at The Capital Spectator and is reproduced here with permission.