The Government’s Macroeconomic Series: X-Files, Dilbert, or Resource Constraints?

Or: is the best explanation for why macro data sometimes run so counter to intuition willful deception (Iraq and WMDs), incompetence (the FEMA response to Katrina), or something prosaic (resource constraints)? The casual reader might think I’m overstating the extreme hypotheses, but there is, after all, a whole website devoted to the conspiracy proposition:

Have you ever wondered why the CPI, GDP and employment numbers run counter to your personal and business experiences? The problem lies in biased and often-manipulated government reporting.

Here’s John Williams’ exhibit 1 in the case against the government:


Figure 1, drawn from John Williams’ “Shadow Government Statistics” website.

The other extreme view holds that incompetence is the problem. (This blogger does not use the word, but the adjective seems implicit in the discussion).

Personally, I think that many of these shortcomings in the data series reported by the government are a function of how well funded the agencies are, the priority accorded good data collection at the highest levels of government (clearly, if no premium is placed on the usefulness of information, there will be little inclination to push for innovation in data collection), and the requisites of timeliness.

(Parenthetically, I won’t deny that sometimes, the level of management seems FEMA-esque, as in the recent attempt to digitize the next Census. But I am hopeful the next administration will put in better management.)

On the first point, I’ve already noted how funding has been declining for the various statistical agencies. [1].

How does this debate bear on the current debate over the economy’s state? Note that at the time of the last recession, GDP was recording positive growth. Only after some revisions did it become apparent that GDP growth had slipped into negative territory. And even later, subsequent estimates indicated that the growth rate had not gone negative for two consecutive quarters. I reprise the graph below to illustrate this point.


Figure 2: Real GDP growth in Ch.1996$, q/q (in log terms) from May 2001 vintage (blue) and May 2006 (red). Source: BEA via FREDII and ALFRED, and author’s calculations.
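For concreteness, the q/q log growth rates plotted above are just differences of log GDP levels, annualized by multiplying by four. A minimal sketch, using hypothetical GDP levels rather than the actual vintage data:

```python
import math

def qoq_log_growth(levels, annualize=True):
    """Quarter-over-quarter growth as log-differences.

    Annualizing a quarterly log-difference multiplies it by 4.
    """
    rates = []
    for prev, curr in zip(levels, levels[1:]):
        g = math.log(curr) - math.log(prev)
        rates.append(4.0 * g if annualize else g)
    return rates

# Hypothetical real GDP levels (billions of chained dollars) -- not actual data.
gdp_levels = [9875.6, 9905.9, 9871.1, 9910.0]
print([round(r, 4) for r in qoq_log_growth(gdp_levels)])
```

Because the advance, preliminary, and later vintages revise the underlying levels, the same formula applied to different vintages yields the divergent growth paths in the figure.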

To highlight the challenges of generating timely estimates of GDP, consider the nature of the data used to calculate it. Figure 3, drawn from Chart 1 of Arnold Katz’s “An Overview of BEA’s Source Data and Estimating Methods for Quarterly GDP” (BEA, November 2006) [pdf], shows the type of data used to generate the advance and preliminary GDP releases. Note the large amount of “trend-based” data used, particularly in the advance release. Fully 55% is “trend based” or “trend based and monthly data” in the advance, dropping to 24.2% in the preliminary.


Figure 3: drawn from Chart 1 of Arnold Katz’s “An Overview of BEA’s Source Data and Estimating Methods for Quarterly GDP” (BEA, November 2006) [pdf].
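In this spirit, “trend based” means the missing source data are filled in by extrapolating recent history. A highly stylized sketch of the idea, with a hypothetical monthly series whose final month has not yet been reported (this is an illustration, not BEA’s actual procedure):

```python
import math

def trend_extrapolate(history):
    """Fill in an unreported value by extending the average log growth
    of the reported history. Illustrative stand-in for trend-based
    estimation; BEA's actual methods are more elaborate."""
    logs = [math.log(x) for x in history]
    avg_growth = (logs[-1] - logs[0]) / (len(logs) - 1)
    return math.exp(logs[-1] + avg_growth)

# Hypothetical monthly source data; the last month of the quarter is missing.
reported = [100.0, 101.0, 102.0]
estimate = trend_extrapolate(reported)
```

When the actual monthly data arrive, the trend-based placeholder is replaced, which is one mechanical source of revisions between the advance release and later vintages.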

The final vintage exhibits relatively little change, in terms of the types of data, relative to the preliminary vintage. What one does see is that the annual revision, usually released in July of each year, incorporates a lot of additional information. So the time profile of GDP could change yet again as more information is released.


Figure 4: drawn from Chart 1 of Arnold Katz’s “An Overview of BEA’s Source Data and Estimating Methods for Quarterly GDP” (BEA, November 2006) [pdf].

As noted in this document:

The source data that BEA uses are collected from a variety of sources and, with few exceptions, for purposes other than the preparation of BEA’s estimates. Data collected by Federal Government agencies provide the backbone of the estimates; these data are supplemented by data from trade associations, businesses, international organizations, and other sources. The Government data are from a number of agencies, mainly the Commerce Department’s Bureau of the Census, the Labor Department’s Bureau of Labor Statistics, the Internal Revenue Service and other agencies of the Treasury Department, the Office of Management and Budget, and the Agriculture Department. Some of the Government-collected data, referred to as “administrative” data, are byproducts of government functions such as education programs, tax collection, defense, and regulation. Nonadministrative data, sometimes referred to as “general purpose” or “statistical” data, include the periodic economic and population censuses and a wide range of sample surveys, such as those that collect data on manufacturing, farm activity, and prices. Of the relatively few data items that BEA collects, most refer to international transactions. These include international trade in services and direct investment (both by foreign residents in the United States and by U.S. residents in foreign countries).

The source data available to BEA are not always ideal from the point of view of preparing the national economic accounts. BEA must develop estimating methods to transform the data. The estimating methods adjust the best available data to the concepts needed for the accounts, fill gaps in coverage of the source data, and make adjustments to the source data to obtain the needed time of recording and valuation.

I think this places into context some of the trade-offs that have to be made when trying to generate estimates of GDP (and its components) fairly quickly after the end of the relevant time period. Inspection of, for instance, the consumption block, highlights how much of the data are trend-like in the first couple of announcements:


Table 1: drawn from Arnold Katz’s “An Overview of BEA’s Source Data and Estimating Methods for Quarterly GDP” (BEA, November 2006) [pdf].

In any case, I think these are points to keep in mind as one takes the most recent GDP numbers and makes flat-out statements that we are, or are not, in a recession. (Egregious example here.)

Now what about the conspiracy view, which holds that the government is systematically fudging the numbers to minimize measured inflation? I think there are two questions to be addressed. The first is whether the older approach is “better” than the newer approach, which incorporates hedonics and allows for changes in basket weights. The second is whether the changes in methodology over time invalidate intertemporal comparisons of inflation rates. I think the case for an affirmative answer is somewhat stronger on the second question than the first, although even in the second case, there are “fixes” to the problem.

As mentioned before, go to the SGS website and you’ll see assertions about how the government has manipulated the CPI over time, so that if one calculated inflation using the CPI as it was constructed pre-Boskin Commission, one would obtain higher inflation rates. I don’t doubt this specific assertion, although without all the spreadsheets I can’t verify it (as for whether this is all intentional, well, WMD/Iraq does give one pause, but I remain skeptical). But the fact that there might be different biases at different times in the data series doesn’t mean that ignoring quality changes in the past was a good thing. On the other hand, it’s hard to adjust the older data to take out the effect of quality changes. But if one were concerned about this specific issue, one could use the experimental research CPI series calculated by the BLS.


Figure 5: 12-month inflation CPI-all urban (blue) and research series CPI-all urban (red), calculated using 12 month log-differencing. Source: BLS and author’s calculations.
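The 12-month log-differencing used in the figure is just the log of the index relative to its level a year earlier. A minimal sketch with hypothetical CPI index values:

```python
import math

def yoy_log_inflation(index):
    """12-month inflation from a monthly price index, computed as
    ln(P_t) - ln(P_{t-12}); one rate per month with a year of history."""
    return [math.log(curr) - math.log(prev)
            for prev, curr in zip(index, index[12:])]

# Hypothetical monthly CPI levels: steady 0.25% monthly price increases.
cpi = [100.0 * 1.0025 ** t for t in range(24)]
rates = yoy_log_inflation(cpi)
```

Applied to the official CPI-U and to the research series, the same formula produces the two lines in the figure, and the gap between them is the cumulative effect of the methodological differences.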

On the first issue, however, it does seem to me that one does want to adjust for quality changes.

Now what is true is that the RS series does not hold fixed the weights — so the weights change in this series in 1987 and 1998. But the question that needs to be asked is whether one should hold weights fixed at, say, their 1982-84 levels, when trying to assess the inflation rate in 2007. I think the answer is no, and the reason goes to substitution effects. As relative prices change, consumers substitute away from the relatively expensive items. Holding fixed the weights overstates inflation. Of course, using the new expenditure shares as weights understates inflation. There is no really correct measure (unless one knows some very specific things about the consumer utility function), just as one can’t say that Paasche is “better” than Laspeyres.
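The substitution argument can be made concrete with a two-good example (hypothetical prices and quantities): when one good doubles in price and consumers shift toward the other, the fixed-weight (Laspeyres) index reads higher than the current-weight (Paasche) index.

```python
def laspeyres(p0, p1, q0):
    """Price index with base-period quantity weights (fixed basket)."""
    return sum(p * q for p, q in zip(p1, q0)) / sum(p * q for p, q in zip(p0, q0))

def paasche(p0, p1, q1):
    """Price index with current-period quantity weights."""
    return sum(p * q for p, q in zip(p1, q1)) / sum(p * q for p, q in zip(p0, q1))

# Two goods; good 1 doubles in price, so consumers substitute toward good 2.
p0, p1 = [1.0, 1.0], [2.0, 1.0]
q0, q1 = [50.0, 50.0], [30.0, 70.0]
print(laspeyres(p0, p1, q0))  # 1.5: fixed basket overstates inflation
print(paasche(p0, p1, q1))    # 1.3: current basket understates it
```

A Fisher index, the geometric mean of the two, is one common compromise, in the spirit of the chained measures the BLS now publishes.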

So, if it’s not conspiracy, and it’s not incompetence, but it’s resource constraints, then what’s to be done? First, and easiest, we all have to be more careful consumers of data. In particular, one needs to understand the preliminary nature of the data. Second, more difficult, one needs to fund the collection and analysis of data more fully. Because there is no natural interest group constituency for data collection, I doubt this will occur easily. It certainly will not occur if those at the very top value dogma over data.

Parting shot: Looking at the latest data, note that while GDP growth (q/q, log terms) in 2008Q1 is greater than zero, it is perilously close to 0. And it is less than the corresponding value for 2001Q1 reported in May 2001, which eventually turned out to be negative. That’s not conspiracy — that’s an outcome of constraints — some unavoidable, some a function of funding choices.


Figure 6: Real GDP growth in Ch.2000$, q/q at an annualized rate, calculated in log-differences. Source: BEA, GDP release of 26 June 2008.


Originally published at Econbrowser and reproduced here with the author’s permission.