Mincer–Zarnowitz quantile and expectile regressions for forecast evaluations under asymmetric loss functions

Received: 30 March 2016 | Revised: 25 September 2016 | Accepted: 31 January 2017
DOI: 10.1002/for.2462
RESEARCH ARTICLE
Mincer–Zarnowitz quantile and expectile regressions for forecast evaluations under asymmetric loss functions
Kemal Guler1 | Pin T. Ng2,3 | Zhijie Xiao4,5
1Department of Economics, Faculty of Economic and Administrative Sciences, Anadolu University, Eskisehir, Turkey
2Franke College of Business, Northern Arizona University, Flagstaff, AZ, USA
3School of Economics, Anhui University, Hefei, China
4Department of Economics, Boston College, Chestnut Hill, MA, USA
5Center for Economic Research, Shandong University, Jinan, China
Correspondence
Zhijie Xiao, Department of Economics,
Boston College, Chestnut Hill, MA 02467,
USA.
Email: xiaoz@bc.edu
Abstract
Forecasts are pervasive in all areas of application in business and daily life. Hence evaluating the accuracy of a forecast is important for both the generators and consumers of forecasts. There are two aspects to forecast evaluation: (a) measuring the accuracy of past forecasts using some summary statistics, and (b) testing the optimality properties of the forecasts through some diagnostic tests. On measuring the accuracy of a past forecast, this paper illustrates that the summary statistics used should match the loss function that was used to generate the forecast. If there is strong evidence that an asymmetric loss function has been used in the generation of a forecast, then a summary statistic that corresponds to that asymmetric loss function should be used in assessing the accuracy of the forecast instead of the popular root mean square error or mean absolute error. On testing the optimality of the forecasts, it is demonstrated how quantile regressions set in the prediction–realization framework of Mincer and Zarnowitz (in J. Mincer (Ed.), Economic Forecasts and Expectations: Analysis of Forecasting Behavior and Performance (pp. 14–20), 1969) can be used to recover the unknown parameter that controls the potentially asymmetric loss function used in generating the past forecasts. Finally, the prediction–realization framework is applied to the Federal Reserve's economic growth forecast and forecast sharing in a PC manufacturing supply chain. It is found that the Federal Reserve considers overprediction approximately 1.5 times as costly as underprediction. It is also found that the PC manufacturer weighs positive forecast errors (underforecasts) about four times as costly as negative forecast errors (overforecasts).
KEYWORDS
asymmetric loss, expectile regression, forecast evaluation, quantile regression
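To make the two evaluation steps concrete, the following is a minimal illustrative sketch (not code from the paper) in Python. It assumes a data set of realizations and past forecasts; the column names, the toy data, and the asymmetry level tau are hypothetical placeholders. The sketch computes an asymmetric check-loss summary statistic in place of RMSE, and fits a Mincer–Zarnowitz quantile regression of realizations on forecasts with statsmodels, where optimality at quantile tau would correspond to an intercept near 0 and a slope near 1.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def mean_check_loss(errors, tau):
    # Average "lin-lin" (check) loss: positive errors (underpredictions)
    # are weighted tau, negative errors (overpredictions) are weighted 1 - tau.
    e = np.asarray(errors, dtype=float)
    return float(np.mean(e * (tau - (e < 0))))

# Toy data for illustration only; replace with actual realization/forecast series.
rng = np.random.default_rng(0)
actual = 100 + rng.normal(size=200).cumsum()
forecast = actual + rng.normal(scale=2.0, size=200) - 0.5  # deliberately biased
df = pd.DataFrame({"actual": actual, "forecast": forecast})

tau = 0.25  # hypothetical asymmetry: overprediction three times as costly as underprediction
errors = df["actual"] - df["forecast"]

# (a) Summary statistic matched to the assumed asymmetric loss, with RMSE for contrast.
print("mean check loss at tau=0.25:", round(mean_check_loss(errors, tau), 3))
print("RMSE:", round(float(np.sqrt(np.mean(errors ** 2))), 3))

# (b) Mincer-Zarnowitz quantile regression of realizations on forecasts at quantile tau;
# forecast optimality at tau corresponds to an intercept near 0 and a slope near 1.
mz = smf.quantreg("actual ~ forecast", df).fit(q=tau)
print(mz.params)
print(mz.conf_int())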
1 INTRODUCTION
Forecasts are pervasive in all areas of business and daily life. Weather forecasts are important for planning day-to-day activities. Farmers rely on them for the planting and harvesting of crops, while the airline and cruise industries need them to make decisions that maintain safety in the sky and at sea. The insurance industry relies on them to make informed pricing and capital decisions. Corporations use forecasting to predict their future financial needs, production planning, human resource planning, and so forth. Forecasts are used by investors to value companies and their securities. The startup of a new business requires forecasts of the demand for the product, the expected shares in the market, the capacity of competitors, the amount and sources of funds, and so forth.
In supply chain management, businesses have to synchronize the ordering of supplies to meet the forecasted demand of their customers. In government policy decisions, economic forecasts are important for determining the appropriate monetary/fiscal policies. In the health care industry, forecasts can be used to target disease management or devise personalized health care based on predicted risk.
As a result, evaluating the accuracy of a forecast is important for both the generators and consumers of forecasts. However, there is abundant evidence that many of the forecasts being generated are inconsistent with the realizations of the forecasted values. Silver (2012) discussed the weather industry's bias toward forecasting more precipitation than would actually occur, what meteorologists call "wet bias." Using 121 responses to a 26-question mail questionnaire sent to the highest-ranking financial officers of the 500 firms on the Fortune 500 listing, Pruitt and Gitman (1987) found that capital budgeting forecasts were optimistically biased by people with work experience. Ali, Klein, and Rosenfeld (1992) found that analysts set overly optimistic forecasts of the next period's annual earnings per share. Lee, Padmanabhan, and Whang (1997) and Cohen, Ho, Ren, and Terwiesch (2003) provided ample evidence of overoptimistic supply chain forecasts across industries ranging from electronics and semiconductors to medical equipment and commercial aircraft. In terms of forecasts of economic variables, Capistrán (2008) provided evidence that the Federal Reserve's inflation forecasts systematically underpredicted inflation before Paul Volcker's appointment as Chairman and systematically overpredicted it afterwards until the second quarter of 1998.
Do these forecast biases signify suboptimal forecast performance? In the traditional sense, overprediction or underprediction biases are indications of suboptimal forecasts. However, the traditional tests for forecast optimality typically rely on the assumption of a symmetric squared error loss function. Under this squared error loss, overprediction and underprediction are weighted equally, and optimal forecasts imply that the observed forecast errors will have zero bias and be uncorrelated with variables in the forecasters' information set.
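Concretely, the traditional optimality test fits the Mincer–Zarnowitz prediction–realization regression

y_t = \alpha + \beta \hat{y}_t + u_t, \qquad H_0 : \alpha = 0,\ \beta = 1,

where y_t is the realization and \hat{y}_t the forecast. Under squared error loss and the null, the forecast error e_t = y_t - \hat{y}_t has zero mean and is uncorrelated with the forecast.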
However, strong arguments can be made that forecasters might not have adopted a symmetric error loss function. For example, in firms' forecasting of sales, overpredictions will result in excess inventory, increased insurance costs, and tied-up capital, whereas underpredictions will lead to loss of goodwill, reputation, and current and future sales. Firms may decide that the cost of lost goodwill is much higher than that of increased insurance costs; therefore, they weigh the underprediction errors more than the overprediction errors. For money managers of banks, overpredicting the value-at-risk ties up more capital than necessary, whereas underpredicting leads to regulatory penalties and the need for increased capital provisions. They may conclude that the cost of increased capital provisions is higher than the cost of tied-up capital and decide to weigh the underprediction errors more than the overprediction errors. It might be particularly costly for the Federal Reserve to overpredict gross domestic product (GDP) growth when growth is already slow, signaling a false recovery, which could lead to an overly tight monetary policy at exactly the wrong time. The cost of overforecasting is not always the same as that of underforecasting. The dissatisfaction that people feel when the weatherman forecasts a sunny day but it turns out to be rainy and, hence, ruins a picnic is greater than when a rainy day is forecast but it turns out to be sunny. This may explain the wet bias and illustrates the asymmetric loss function used by weathermen when producing their forecasts.
Keane and Runkle argued that

If forecasters have differential costs of over- and underprediction, it could be rational for them to produce biased forecasts. If we were to find that forecasts are biased, it could still be claimed that forecasters were rational if it could be shown that they had such differential costs. (Keane & Runkle, 1990, p. 719)
Varian (1974), Waud (1976), Zellner (1986), Christoffersen and Diebold (1997), and Patton and Timmermann (2007) all argued that the presence of forecast bias is not necessarily an indication of a suboptimal forecast. Rostek (2010) provided a foundation for the practical and theoretical justifications for the assignment of different weights to overprediction and underprediction by a forecaster. Inspired by the prior work of Manski (1988) and Chambers (2007), Rostek (2010) formalized the concept of quantile maximization in choice-theoretic language and demonstrated its robustness and ordinality properties and its advantages compared to the traditional moments-based decision criteria. The specification of a quantile maximizer provides a systematic definition of riskiness in terms of downside risk and upside chance (losses and gains) in a forecaster's asymmetric preference toward overprediction and underprediction. In an attempt to improve the forecast performance in predicting state tax revenues in Iowa, Lewis and Whiteman (2015) provided the example that the Institute for Economic Research at the University of Iowa had used an asymmetric loss function that treated forecasted revenue shortfalls as d = 1, 2, … , 10 times as costly as equal-sized surpluses.
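As one way to formalize such asymmetry (a sketch using the standard lin-lin, or check, loss convention rather than the Institute's actual loss function), let e = y - \hat{y} denote the forecast error and

L_d(e) = d\,|e| if e < 0 (a shortfall: realization below forecast), \qquad L_d(e) = |e| if e \ge 0.

This is proportional to the check loss \rho_\tau(e) = e(\tau - 1\{e < 0\}) with \tau = 1/(1 + d), so the forecast that minimizes expected loss is the \tau-th conditional quantile of the realization; for example, d = 4 corresponds to forecasting the 0.2 quantile, a deliberately conservative forecast.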
Numerous articles have argued for the likelihood of, and addressed the issues related to, asymmetric loss functions being used by forecasters (see, e.g., Artis & Marcellino, 2001; Batchelor & Peel, 1998; Capistrán, 2006; Christoffersen & Diebold, 1997; Elliott & Timmermann, 2008; Granger, 1969, 1999; Granger & Newbold, 1986; Granger & Pesaran, 2000; Ito, 1990; Patton & Timmermann, 2007; Pesaran & Skouras, 2002; Varian, 1974; Weiss, 1996; West, Edison, & Cho, 1993; Zellner, 1986).
