Judgmental selection of forecasting models

Authors: Nikolaos Kourentzes, Fotios Petropoulos, Enno Siemsen, Konstantinos Nikolopoulos
Published: 01 May 2018
DOI: https://doi.org/10.1016/j.jom.2018.05.005
Journal of Operations Management (journal homepage: www.elsevier.com/locate/jom)
Fotios Petropoulos (a), Nikolaos Kourentzes (b), Konstantinos Nikolopoulos (c), Enno Siemsen (d,*)

(a) School of Management, University of Bath, UK
(b) Lancaster University Management School, Lancaster University, UK
(c) Bangor Business School, Bangor University, UK
(d) Wisconsin School of Business, University of Wisconsin, USA
ARTICLE INFO

Keywords: Model selection; Behavioral operations; Decomposition; Combination

ABSTRACT
In this paper, we explored how judgment can be used to improve the selection of a forecasting model. We compared the performance of judgmental model selection against a standard algorithm based on information criteria. We also examined the efficacy of a judgmental model-build approach, in which experts were asked to decide on the existence of the structural components (trend and seasonality) of the time series instead of directly selecting a model from a choice set. Our behavioral study used data from almost 700 participants, including forecasting practitioners. The results from our experiment suggest that selecting models judgmentally results in performance that is on par with, if not better than, that of algorithmic selection. Further, judgmental model selection helps to avoid the worst models more frequently than algorithmic selection does. Finally, a simple combination of the statistical and judgmental selections, as well as judgmental aggregation, significantly outperforms both statistical and judgmental selections alone.
1. Introduction
Planning processes in operations (e.g., capacity, production, inventory, and materials requirement plans) rely on a demand forecast. The quality of these plans depends on the accuracy of this forecast. This relationship is well documented (Gardner, 1990; Ritzman and King, 1993; Sanders and Graman, 2009; Oliva and Watson, 2009). Small improvements in forecast accuracy can lead to large reductions in inventory and increases in service levels. There is thus a long history of research in operations management that examines forecasting processes (Seifert et al., 2015; Nenova and May, 2016; van der Laan et al., 2016, are recent examples).
Forecasting model selection has attracted considerable academic and practitioner attention during the last 30 years. There are many models to choose from (different forms of exponential smoothing, autoregressive integrated moving average (ARIMA) models, neural networks, etc.), and forecasters in practice have to select which one to use. Many academic studies have examined different statistical selection methodologies to identify the best model, the "holy grail" in forecasting research (Petropoulos et al., 2014). If the most appropriate model for each time series can be determined, forecasting accuracy can be significantly improved (Fildes, 2001), typically by as much as 25-30% (Fildes and Petropoulos, 2015).
In general, forecasting software recommends or selects a model based on a statistical algorithm. The performance of candidate models is evaluated either on in-sample data, usually using appropriate information criteria (Burnham and Anderson, 2002), or by withholding a set of data points to create a validation sample (out-of-sample evaluation, Ord et al., 2017; also known as cross-validated error). However, it is easy to devise examples in which statistical model selection (based either on in-sample or out-of-sample evaluation) fails. Such cases are common in real forecasting applications and thus make forecasting model selection a non-trivial task in practice.
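To make the algorithmic baseline concrete, the sketch below selects between two exponential smoothing variants (simple exponential smoothing and Holt's linear trend method) using an in-sample information criterion. This is an illustrative toy, not the paper's procedure or any particular software package's: the function names (`ses_fit`, `holt_fit`, `select_by_aic`) are hypothetical, the smoothing parameters are fixed rather than optimized, and real forecasting software searches a much larger candidate set.

```python
import math

def ses_fit(y, alpha=0.8):
    """Simple exponential smoothing; returns one-step-ahead fitted values."""
    level = y[0]
    fitted = []
    for obs in y:
        fitted.append(level)                        # forecast made before seeing obs
        level = alpha * obs + (1 - alpha) * level   # update the level
    return fitted

def holt_fit(y, alpha=0.8, beta=0.2):
    """Holt's linear trend method; returns one-step-ahead fitted values."""
    level, trend = y[0], y[1] - y[0]
    fitted = []
    for obs in y:
        fitted.append(level + trend)
        new_level = alpha * obs + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        level = new_level
    return fitted

def aic(y, fitted, k):
    """AIC under a Gaussian error assumption: n*ln(SSE/n) + 2k."""
    n = len(y)
    sse = sum((a - f) ** 2 for a, f in zip(y, fitted))
    return n * math.log(sse / n) + 2 * k

def select_by_aic(y):
    """Pick the candidate model with the lowest in-sample AIC."""
    candidates = {
        "ses": (ses_fit(y), 2),    # parameters counted: alpha, initial level
        "holt": (holt_fit(y), 4),  # parameters counted: alpha, beta, two initial states
    }
    return min(candidates, key=lambda name: aic(y, *candidates[name]))

trended = [10 + 2 * t for t in range(12)]  # a clearly trending series
print(select_by_aic(trended))              # prints "holt" for this series
```

The AIC's penalty term (2k) is what keeps the richer trend model from being chosen on series where the extra parameters buy little fit; an out-of-sample variant would instead compare forecast errors on a withheld validation sample, which is exactly the second evaluation route described above.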
Practitioners can apply judgment to different tasks within the forecasting process, namely:
1. definition of a set of candidate models,
2. selection of a model,
3. parametrization of models,
4. production of forecasts, and
5. forecast revisions/adjustments.
Most of the attention in the judgmental forecasting literature focuses on the latter two tasks. Experts are either asked to directly estimate the point forecasts of future values of an event or a time series (see, for example, Hogarth and Makridakis, 1981; Petropoulos et al., 2017), or they are asked to adjust (or correct) the estimates provided by a statistical method in order to take additional information into account;
Received 3 October 2017; Received in revised form 22 May 2018; Accepted 23 May 2018.
(*) Corresponding author. E-mail address: esiemsen@wisc.edu (E. Siemsen).
Journal of Operations Management 60 (2018) 34–46. Available online 18 June 2018.
0272-6963/ © 2018 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/BY/4.0/).