Forecast bias of government agencies.

Author: Robert Krol

Forecasts of future economic activity underlie any budget revenue projection. However, the forecasters in a government agency may face incentives or pressures that introduce forecast bias. For example, agency forecasters may be rewarded for a rosy growth forecast that allows politicians to avoid politically costly program cuts or tax increases. Similarly, they may be penalized for underforecasting economic growth. Where the reward system is asymmetric in this way, it would make sense to observe biased forecasts.

This article evaluates real GDP forecasts of the Congressional Budget Office and the Office of Management and Budget. As a basis for comparison, the Blue Chip Consensus forecast is also evaluated. Tests in previous work assumed the forecast loss function was symmetric. This implies the political costs of a high or low GDP forecast are equal, so forecasts should be unbiased.

This article differs from previous work by conducting tests assuming the forecast loss function may not be symmetric. Public choice models of political decisionmaking suggest government agencies such as the CBO and OMB face pressures that are likely to result in systematically biased forecasts. In this article, a flexible loss function allows for estimation of a parameter that captures the degree and direction of any forecast asymmetry. Elliott, Komunjer, and Timmermann (2005, 2008) show that failing to account for loss function asymmetry negatively affects tests that evaluate forecast accuracy and efficiency in the use of information available to forecasters.

Robert Krol is Professor of Economics at California State University, Northridge. He thanks Shirley Svorny for helpful comments.

Evidence from the existing literature examining CBO and OMB forecast performance using the standard symmetric loss function is mixed. Some studies evaluate budget forecasts while others evaluate forecasts of economic activity, such as real GDP growth. Based on these efforts, three general conclusions can be drawn. First, short-run forecasts of GDP and revenues are generally unbiased, while long-run forecasts of these variables have an upward bias. (1) Second, both short- and long-run forecasts of GDP and revenues usually fail tests of information use efficiency. Researchers find that forecasters do not use available information to improve their forecasts. (2) Third, despite what are likely to be different political pressures on different agencies, most of the studies find forecast biases to be similar across agencies. (3)

Using a flexible loss function to evaluate the CBO, OMB, and Blue Chip Consensus forecasts, I find significant evidence of asymmetry in the forecast loss functions. The CBO and the Blue Chip Consensus have a downward bias in their forecasts of real GDP growth two and five years out. The CBO forecast is consistent with the private sector consensus. The OMB forecast loss function is also asymmetric. However, the OMB bias is in the opposite direction. OMB forecasters overforecast real GDP growth at the two- and five-year horizons by 5 percent and 14 percent respectively. I argue that this finding is consistent with incentives facing the two agencies.

In addition, once the asymmetry of the forecast loss function is taken into account, the traditional finding that available information is not used in the forecasts is rejected in favor of the finding that government forecasters use available information efficiently. These results illustrate the importance of taking into account loss function asymmetries when evaluating the forecast performance of government agencies that are subjected to political pressures.

This article is organized in the following manner. The first and second sections discuss testing procedures under symmetric and flexible loss functions. The third and fourth sections report the results of the tests under alternative loss functions. The fifth section articulates why loss functions would be expected to differ among the agencies in question. The article ends with a brief conclusion.

Testing Forecast Accuracy with a Symmetric Loss Function

Underlying any forecast is a loss function. Standard forecast evaluations assume the forecast loss function to be quadratic and symmetric. A feature of this type of loss function is that the optimal forecast is the conditional expectation, with the implication that forecasts are unbiased (Elliott, Komunjer, and Timmermann 2005, 2008). I conduct a standard test of forecast performance by regressing the actual growth in real GDP over j periods (log Y_t+j − log Y_t) on the predicted growth in real GDP over j periods (log Ŷ_t+j − log Y_t):

(1) log Y_t+j − log Y_t = α + β(log Ŷ_t+j − log Y_t) + ε_t,

where log Y_t+j and log Ŷ_t+j are the logarithms of actual and predicted real GDP in period t+j, respectively, α and β are parameters to be estimated, and ε_t is the error term, which should be uncorrelated at horizons beyond j−1. (4) Under the unbiased forecast hypothesis, I test the joint null hypothesis that the parameter estimates are α = 0 and β = 1. Rejecting the null hypothesis implies the forecasts are biased.

The second standard test examines whether forecasters use available information efficiently. Past information about the economy should be uncorrelated with forecast errors. For this test, the forecast error (μ_t) is regressed on information, such as past forecast errors (μ_t−i), available at the time of the forecast:

(2) μ_t = ν + τ_1 μ_t−1 + τ_2 μ_t−2 + ξ_t,

where ν, τ_1, and τ_2 are parameters to be estimated, ξ_t is a white noise error term, and μ_t−i are past forecast errors. The joint null hypothesis tested in this case is ν = τ_1 = τ_2 = 0. Rejecting the null hypothesis means past forecast errors could be used to reduce the current forecast error. If this is the case, researchers conclude that available information is not being used efficiently.

Testing Forecast Accuracy with an Asymmetric Loss Function

Elliott, Komunjer, and Timmermann (2005, 2008) develop a flexible loss function that provides an alternative method for evaluating forecasts. This approach allows the researcher to estimate a loss function parameter to determine the extent and direction of any asymmetry in the forecast loss function. As they show, ignoring asymmetry can bias forecast evaluation tests. Under certain conditions, a biased forecast can be...
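Although the estimation details are not reproduced here, the Elliott, Komunjer, and Timmermann flexible loss function itself can be sketched. Their specification is L(e) = [τ + (1 − 2τ)·1(e < 0)]·|e|^p, where e is the forecast error (actual minus predicted), p sets the shape (p = 2 gives quad-quad loss), and τ ∈ (0, 1) is the asymmetry parameter: τ = 0.5 recovers the standard symmetric loss, while τ ≠ 0.5 makes over- and underforecasts costly to different degrees.

```python
import numpy as np

def ekt_loss(error, tau=0.5, p=2):
    """Flexible loss of Elliott, Komunjer, and Timmermann (2005).

    error: forecast error e = actual - predicted (scalar or array).
    tau:   asymmetry parameter in (0, 1); 0.5 is symmetric loss.
    p:     exponent; p = 2 gives quad-quad loss, p = 1 lin-lin loss.
    """
    error = np.asarray(error, dtype=float)
    weight = tau + (1.0 - 2.0 * tau) * (error < 0)
    return weight * np.abs(error) ** p

# At tau = 0.5, errors of +1 and -1 are equally costly (symmetric case).
print(ekt_loss(1.0), ekt_loss(-1.0))

# At tau = 0.7, a positive error (underforecast) costs more than a
# negative error of the same size, so the optimal forecast is pushed up.
print(ekt_loss(1.0, tau=0.7), ekt_loss(-1.0, tau=0.7))
```

Estimating τ from observed forecast errors, as the article goes on to do, reveals the direction of any asymmetry an agency's forecasters appear to act on.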
