Received: 30 November 2016 | Accepted: 25 February 2017
DOI: 10.1002/for.2470
RESEARCH ARTICLE
Adjusting for information content when comparing forecast performance
Michael K. Andersson¹ | Ted Aranki¹ | André Reslow²,³
¹Finansinspektionen, Stockholm, Sweden
²Department of Economics, Uppsala University, Uppsala, Sweden
³Sveriges Riksbank, Stockholm, Sweden
Correspondence
André Reslow, Monetary Policy Department, Sveriges Riksbank, SE-103 37 Stockholm, Sweden.
Email: andre.reslow@riksbank.se
Cross-institutional forecast evaluations may be severely distorted by the fact that forecasts are made at different points in time and therefore with different amounts of information. This paper proposes a method to account for these differences when analyzing an unbalanced panel of forecasts. The method computes the timing effect and the forecaster's ability simultaneously. Monte Carlo simulation demonstrates that evaluations that do not adjust for the differences in information content may be misleading. In addition, the method is applied to a real-world dataset of 10 Swedish forecasters for the period 1999–2015. The results show that the ranking of the forecasters is affected by the proposed adjustment.
KEYWORDS
forecast error, forecast comparison, publication time, evaluation, error component model, panel data
1 INTRODUCTION
Many agents in the economy, including economic policy makers, regularly publish forecasts of the development of the economy. Since important economic and political decisions are usually based on forecasts, it is crucial that these predictions are as accurate as possible. Therefore, evaluations are regularly undertaken to assess performance. Usually, evaluations compare the forecast at a specific point in time with the outcome, or use a statistical measure based on the two quantities. One such measure is the mean absolute error (MAE; see, e.g., Diebold, 2007; Gneiting, 2011), defined below. Albeit informative, such evaluations are in general insufficient: a large forecast error for a particular observation can be the consequence of a shock that could not have been foreseen.
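For reference, the MAE over $T$ evaluation periods, with outcomes $y_t$ and forecasts $\hat{y}_t$, has the standard textbook definition (the notation here is generic, not taken from the paper):

$$\mathrm{MAE} = \frac{1}{T}\sum_{t=1}^{T}\bigl|\,y_t - \hat{y}_t\,\bigr|.$$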
Comparing different agencies is a common way to handle this problem. For example, central banks, international organizations, and other institutions publish evaluations of their forecasts and compare them with those of other forecasters (see, e.g., Bank of England, 2015; European Central Bank, 2013; Sveriges Riksbank, 2016; Timmermann, 2007; Vogel, 2007). More in-depth analyses of forecasts are presented on a less frequent basis. For instance, Blix, Wadefjord, Wienecke, and Ådahl (2001) compare different forecasters' performance for key Swedish macroeconomic variables. Andersson, Karlsson, and Svensson (2007) estimate the Riksbank's accuracy in relation to that of the National Institute of Economic Research and simple econometric specifications. Davies and Lahiri (1995) and Bauer, Eisenbeis, Waggoner, and Zha (2003) compare the agents of the Blue Chip Survey, and Boero, Smith, and Wallis (2008) assess the Bank of England Survey of External Forecasters. Goh and Lawrence (2006) compare the accuracy of New Zealand forecasters in terms of root mean square errors and their average relative rank. Cabanillas and Terzi (2012) present an assessment of the European Commission's track record and compare the forecast errors of gross domestic product (GDP) growth with those of other international institutions.
Comparing forecasters is appealing but not necessarily straightforward. One problem that evaluations face is that forecasts are published at different points in time. In practice, this implies that different forecasters have different amounts of information when they prepare their forecasts. To make a fair comparison, this must be accounted for, as the simulation sketch below illustrates.
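To see why, consider a minimal illustrative simulation (this is not the paper's Monte Carlo design; the two-shock setup, shock variances, and sample size are assumptions made purely for illustration). Two forecasters of identical ability each publish the optimal forecast given their information set, but one publishes after an intermediate shock has been observed:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
T = 100_000  # number of simulated target periods

# The outcome is the sum of two standard normal shocks. The "early"
# forecaster publishes before either shock is realized; the "late"
# forecaster publishes after observing e1. Both use the optimal
# forecast given their information set, so their ability is identical.
e1 = rng.normal(size=T)  # shock realized between the two publication dates
e2 = rng.normal(size=T)  # shock realized after both publication dates
outcome = e1 + e2

forecast_early = np.zeros(T)  # E[outcome | nothing observed] = 0
forecast_late = e1            # E[outcome | e1 observed] = e1

mae_early = np.mean(np.abs(outcome - forecast_early))
mae_late = np.mean(np.abs(outcome - forecast_late))
print(f"MAE, early publisher: {mae_early:.3f}")  # ~ sqrt(2)*sqrt(2/pi) = 1.13
print(f"MAE, late publisher:  {mae_late:.3f}")   # ~ sqrt(2/pi)         = 0.80
```

A naive MAE ranking would declare the late publisher the better forecaster, even though the entire gap reflects publication timing rather than skill; separating this timing effect from genuine ability is exactly what the method proposed in this paper is designed to do.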
One attempt to reduce this problem is to compare forecasts produced at almost the same point in time. This approach is far from flawless, since one month of information may be important, for example if a quarterly national accounts figure is published in that particular month. Another problem is that this