Forecasting and Empirical Methods in Finance and Macroeconomics.

Francis X. Diebold [*]

All economic agents forecast all the time, and forecasting figures especially prominently in financial and macroeconomic contexts. Central to finance, for example, is the idea of the expected present value of earnings flows, and central to macroeconomics is the idea of expectations and their effects on investment and consumption decisions. Moreover, predictive ideas in finance and macroeconomics are very much intertwined. For example, modern asset pricing models attribute excess returns and return predictability in part to macroeconomic factors such as recession risk.

In finance recently, there has been extensive inquiry into issues such as long-horizon mean reversion in asset returns, persistence in mutual fund performance, volatility and correlation forecasting with applications to financial risk management, and selection biases attributable to survival or data snooping. [1] In macroeconomics, we have seen the development and application of new coincident and leading indicators and tracking portfolios, diffusion indexes, regime-switching models (with potentially time-varying transition probabilities), and new breeds of macroeconomic models that demand new tools for estimation and forecasting.

The development and assessment of econometric methods for use in empirical finance and macroeconomics, with special emphasis on problems of prediction, is very important. That is the subject of my own research program, as well as of an NBER working group that Kenneth D. West and I lead. [2] Here I describe some aspects of that research, ranging from general issues of forecast construction and evaluation to specific topics such as financial asset return volatility and business cycles.

Forecast Construction and Evaluation in Finance and Macroeconomics

Motivated by advances in finance and macroeconomics, recent research has produced new forecasting methods and refined existing ones. [3] For example, prediction problems involving asymmetric loss functions arise routinely in many fields, including finance, as when nonlinear tax schedules have different effects on speculative profits and losses. [4] In recent work, I have developed methods for optimal prediction under general loss structures, characterized the optimal predictor, provided workable methods for computing it, and established tight links to new work on volatility forecastability, which I discuss later. [5]
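
To fix ideas, here is a minimal numerical sketch (my own illustration, not code from the papers cited) of the classic result underlying this line of work: under asymmetric "lin-lin" loss, which penalizes under-prediction at rate a and over-prediction at rate b, the optimal point forecast is the a/(a+b) quantile of the conditional distribution rather than its mean.

import numpy as np

# Sketch only: under lin-lin loss,
#   L(e) = a * e      if e = y - yhat > 0   (under-prediction)
#   L(e) = b * (-e)   if e <= 0             (over-prediction)
# the optimal point forecast is the a/(a+b) quantile of the conditional
# distribution of y, not its conditional mean.

def linlin_loss(y, yhat, a, b):
    e = y - yhat
    return np.where(e > 0, a * e, -b * e)

rng = np.random.default_rng(0)
y = rng.normal(loc=1.0, scale=2.0, size=100_000)   # stand-in for draws from the conditional distribution

a, b = 2.0, 1.0                                    # under-prediction twice as costly as over-prediction
mean_forecast = y.mean()                           # optimal under symmetric quadratic loss
quantile_forecast = np.quantile(y, a / (a + b))    # optimal under lin-lin loss

for name, f in [("conditional mean", mean_forecast), ("a/(a+b) quantile", quantile_forecast)]:
    print(f"{name:>16s}: forecast = {f:6.3f}, expected lin-lin loss = {linlin_loss(y, f, a, b).mean():.4f}")

The second forecast deliberately over-predicts, and its expected loss under the asymmetric criterion is lower than that of the conditional mean.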

In related work motivated by financial considerations, such as "convergence trades," and macroeconomic considerations, such as long-run stability of the "great ratios," Peter F. Christoffersen and I have considered the forecasting of co-integrated variables. We show that at long horizons nothing is lost by ignoring co-integration when forecasts are evaluated using standard multivariate forecast accuracy measures. [6] Ultimately, our results suggest not that co-integration is unimportant but that standard forecast accuracy measures are deficient because they fail to value the maintenance of co-integrating relationships among variables. We suggest alternative measures that explicitly do this.
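
A small simulation sketch (my own, under the simplest possible assumptions: two series sharing a single random-walk trend) illustrates the point. At a long horizon, the trace mean squared error of forecasts that ignore co-integration is essentially indistinguishable from that of forecasts that impose it, while the error in the co-integrating combination y1 - y2 clearly favors the forecasts that respect the long-run relationship.

import numpy as np

rng = np.random.default_rng(1)
T, h, n_rep = 200, 40, 5000          # estimation sample, long forecast horizon, Monte Carlo replications
tmse_ign = tmse_imp = cmse_ign = cmse_imp = 0.0

for _ in range(n_rep):
    u = rng.normal(size=T + h)
    w = np.cumsum(u)                                   # common stochastic trend (random walk)
    e1, e2 = rng.normal(size=T + h), rng.normal(size=T + h)
    y1, y2 = w + e1, w + e2                            # co-integrated pair: y1 - y2 is stationary

    # h-step-ahead forecasts made at time T-1
    f1_ign, f2_ign = y1[T - 1], y2[T - 1]              # ignore co-integration: univariate random-walk forecasts
    trend = 0.5 * (y1[T - 1] + y2[T - 1])              # impose co-integration: common-trend forecast,
    f1_imp, f2_imp = trend, trend                      #   so f1 - f2 = 0, the long-run value of y1 - y2

    a1, a2 = y1[T - 1 + h], y2[T - 1 + h]              # realized values at the target date
    tmse_ign += (a1 - f1_ign) ** 2 + (a2 - f2_ign) ** 2
    tmse_imp += (a1 - f1_imp) ** 2 + (a2 - f2_imp) ** 2
    cmse_ign += ((a1 - a2) - (f1_ign - f2_ign)) ** 2
    cmse_imp += ((a1 - a2) - (f1_imp - f2_imp)) ** 2

print("trace MSE (ignore vs. impose):        ", tmse_ign / n_rep, tmse_imp / n_rep)   # nearly identical
print("co-integrating-combination MSE:       ", cmse_ign / n_rep, cmse_imp / n_rep)   # imposing wins clearly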

Forecast accuracy is obviously important because forecasts are used to guide decisions. Accuracy also matters to those who produce forecasts, because reputations and fortunes rise and fall with it. Comparisons of forecast accuracy are important to economists more generally, who must discriminate among competing economic hypotheses. Predictive performance and model adequacy are inextricably linked: predictive failure implies model inadequacy.

The evaluation of forecast accuracy is particularly common in finance and macroeconomics. In finance, one often needs to assess the validity of claims that a certain model can predict returns relative to a benchmark, such as a martingale. This is a question of point forecasting, and much has been written about the evaluation and combination of point forecasts. [7] In particular, Roberto S. Mariano and I have developed formal methods for testing the null hypothesis that there is no difference in the accuracy of two competing forecasts. [8] A wide variety of accuracy measures can be used (in particular, the loss function need not be quadratic, or even symmetric), and the forecast errors can be non-Gaussian, nonzero-mean, serially correlated, and contemporaneously correlated. Subsequent research has extended our approach to account for parameter estimation uncertainty [9] and data-snooping bias. [10]
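
For readers who want the mechanics, the sketch below (my own simplified implementation, not the authors' code) computes the test statistic as the mean loss differential divided by an estimate of its standard error, built here from a rectangular-window long-run variance using h-1 autocovariances; under the null of equal expected loss the statistic is asymptotically standard normal.

import numpy as np
from scipy.stats import norm

def diebold_mariano(e1, e2, loss=np.square, h=1):
    """Sketch of a test of equal predictive accuracy.

    e1, e2 : forecast errors from two competing forecasts of the same target
    loss   : per-period loss function (need not be quadratic or symmetric)
    h      : forecast horizon; the loss differential is treated as MA(h-1),
             so the long-run variance sums h-1 autocovariances.
    """
    d = loss(np.asarray(e1)) - loss(np.asarray(e2))         # loss differential
    T, dbar = d.size, d.mean()
    gammas = [((d[k:] - dbar) * (d[:T - k] - dbar)).mean() for k in range(h)]
    lrv = gammas[0] + 2.0 * sum(gammas[1:])                  # rectangular-window long-run variance
    dm = dbar / np.sqrt(lrv / T)                             # asymptotically N(0,1) under the null
    return dm, 2.0 * (1.0 - norm.cdf(abs(dm)))

# Toy usage with simulated 1-step-ahead errors: the benchmark is noisier than the model.
rng = np.random.default_rng(0)
e_benchmark = rng.normal(scale=1.2, size=500)
e_model = rng.normal(scale=1.0, size=500)
print(diebold_mariano(e_benchmark, e_model, loss=np.abs, h=1))

The loss argument makes the asymmetric and non-quadratic cases described above immediate: any per-period loss function can be passed in place of squared or absolute error.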

...
