Comparison of forecasting performances: Does normalization and variance stabilization method beat GARCH(1,1)-type models? Empirical evidence from the stock markets

RESEARCH ARTICLE

Emrah Gulay, Hamdi Emec
Department of Econometrics, Dokuz Eylul University, Buca, Izmir, Turkey

Received: 1 March 2015 | Revised: 28 November 2016 | Accepted: 20 April 2017
Published: 1 March 2018
DOI: 10.1002/for.2478

Correspondence
Emrah Gulay, Department of Econometrics, Dokuz Eylul University IIBF, Dokuzcesmeler Buca, Izmir 35160, Turkey.
Email: gulay.emrah@gmail.com
Abstract
In this paper, we present a comparison between the forecasting performances of the normalization and variance stabilization method (NoVaS) and the GARCH(1,1), EGARCH(1,1), and GJR-GARCH(1,1) models. The aim of this study is to compare the out-of-sample forecasting performances of the models used throughout the study and to show that the NoVaS method is better than GARCH(1,1)-type models in the context of out-of-sample forecasting performance. We study the out-of-sample forecasting performances of GARCH(1,1)-type models and the NoVaS method based on the generalized error distribution, unlike the normal and Student's t distributions. What also makes the study different is the use of return series calculated logarithmically and arithmetically in terms of forecasting performance. For comparing the out-of-sample forecasting performances, we focus on different datasets, such as the S&P 500 and the logarithmic and arithmetic BİST 100 return series. The key result of our analysis is that the NoVaS method performs better than GARCH(1,1)-type models in terms of out-of-sample forecasting performance. This result can offer useful guidance in model building for out-of-sample forecasting purposes, aimed at improving forecasting accuracy.
KEYWORDS
ARCH/GARCH models, financial time series, forecasting, forecasting performance measures, NoVaS,
volatility
1 INTRODUCTION
It seems that many researchers who study the modeling and forecasting of volatility, which plays an important role in financial markets, have become increasingly interested in this field, especially during the last few years. Portfolio managers, organizations dealing with the buying of options, and market regulators are interested in forecasting volatility with proper levels of accuracy. This is one of the main reasons why such forecasts form a basic component of portfolio optimization, derivative pricing, and value at risk.
The literature shows that various models have been used to measure volatility in financial data. It is widely known that the most successful of these models is the generalized autoregressive conditional heteroskedasticity (GARCH) model, introduced by Bollerslev (1986). The basic idea underlying the ARCH model suggested by Engle (1982) has been shown to be the starting point of GARCH models. Bollerslev developed the ARCH models by adding features such as asymmetry, long memory, or structural breaks to the structure of the ARCH model.
Journal of Forecasting. 2018;37:133–150. wileyonlinelibrary.com/journal/for. Copyright © 2017 John Wiley & Sons, Ltd.

The major reason why GARCH models are important for practitioners studying financial markets or other disciplines is based on their ability to capture facts about changing volatility, persistence, and volatility clustering.
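To make the recursion behind these models concrete, a minimal simulation sketch of a GARCH(1,1) process is given below. The parameter values are hypothetical, chosen only so that α + β < 1; they are not estimates from this study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical GARCH(1,1) parameters; alpha + beta < 1 keeps the
# process covariance stationary (these are NOT estimates from the paper).
omega, alpha, beta = 0.05, 0.08, 0.90

n = 1000
eps = np.empty(n)      # simulated return innovations
sigma2 = np.empty(n)   # conditional variances

# Start the recursion at the unconditional variance omega / (1 - alpha - beta).
sigma2[0] = omega / (1.0 - alpha - beta)
eps[0] = np.sqrt(sigma2[0]) * rng.standard_normal()

for t in range(1, n):
    # GARCH(1,1) recursion: sigma2_t = omega + alpha*eps_{t-1}^2 + beta*sigma2_{t-1}
    sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    eps[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

# Large squared shocks feed back into sigma2, producing volatility clustering.
print(round(float(sigma2.mean()), 3))
```

Because α + β = 0.98 here, shocks to the conditional variance die out slowly, which is the persistence and volatility clustering just mentioned.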
Teräsvirta (1996) has shown that GARCH models exhibit poor forecasting performance. However, nonlinear time series
studies in the literature, such as Clements and Smith (1997),
have found that there are no statistical differences in terms
of forecasting performance when comparing simpler linear
models with nonlinear models, even if estimating under a true
model. At the same time, Andersen and Bollerslev (1998)
indicated in their studies that GARCH models actually show
good performance in forecasting volatility when selecting a
proxy variable, such as historical volatility, for unobservable
volatility. While Lundbergh and Teräsvirta (2002) and Van
Dijk et al. (2002) have reached similar results to Clements
and Smith, Malmsten and Teräsvirta (2004) have achieved the same results as Teräsvirta (1996).
The studies that concentrate on improving forecasting
performances of the models have been centered on the
assumption that the errors arise from distinct distributions.
Models estimated under the assumption that the errors fol-
low a normal distribution (ND) have been studied in the
literature for a long time. However, models estimated under the assumption that the errors follow Student's t (ST) or the generalized error distribution (GED) can be compared with each other with respect to forecasting performance. The findings of the studies in the literature emphasize that forecasting performances
out-of-sample and in-sample are very different from each
other (Brzeszczynski & Welfe, 2004; Loudon, Watt, & Yadav,
2000). For example, Franses and Ghijsels (1999) showed
that the GARCH(1,1) model, with ST, has the worst fore-
casting performance. In another study (Lopez, 2001) the
authors focused on exchange rates. They compared forecast-
ing performances of the GARCH(1,1) model under different
distributions. As a result of their study, they found differences
in forecasting performances. Accuracies in model specifi-
cations or assumptions related to varied distributions are
not sufficient in themselves for modeling volatility. Also, it
is quite clear that numerous forecasting performance mea-
surements have been used in many studies on modeling
volatility. The most common forecasting performance mea-
surements are mean square error (MSE) and mean absolute
error (MAE).
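As a concrete illustration of these two measures, the sketch below computes MSE and MAE for a set of one-step-ahead variance forecasts against squared returns used as a volatility proxy. All numbers are hypothetical.

```python
import numpy as np

# Hypothetical out-of-sample values: squared returns as a volatility proxy
# and one-step-ahead variance forecasts from some candidate model.
squared_returns = np.array([1.2, 0.8, 2.5, 0.4, 1.1])
forecasts = np.array([1.0, 1.0, 2.0, 0.6, 1.2])

errors = squared_returns - forecasts
mse = np.mean(errors ** 2)       # mean square error
mae = np.mean(np.abs(errors))    # mean absolute error

print(mse, mae)  # lower values indicate better forecasting performance
```

MSE penalizes large forecast errors more heavily than MAE, which is why studies often report both.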
This paper aims to show that the GARCH(1,1) model has good forecasting performance for squared returns, provided that accurate forecasting performance measurements and good predictors are selected for the model. This paper also points out that the NoVaS method is better than the GARCH(1,1)-type models in terms of out-of-sample forecasting performance, and that there is no difference between using logarithmic return series and arithmetic return series with respect to forecasting volatility.
This paper is organized as follows. Section 2 explains the concept of volatility; we discuss the factors affecting volatility and the determinants of changes in volatility. Section 3 contains the literature review. In Section 4, we discuss the NoVaS methodology and present the datasets. In Section 5 the empirical results are discussed. Finally, in Section 6, conclusions are presented.
2 CONCEPT OF VOLATILITY
Volatility, which has an important place in the discipline of
finance, has become a crucial topic for practitioners deal-
ing with financial markets or even the general audience
(Daly, 2011). To illustrate the importance of understanding
volatility, the stock market crash of 1987 in the USA may be
cited as an example.
On October 19, 1987, the Dow Jones Industrial Index
dropped by over 508 points, from 2246.7 to 1738.4. This
decline was the biggest 1-day drop in the Dow's history from
1885 until then. At the same time, this slump was the largest
percentage decline. Despite this, all attention was focused on
the absolute magnitude of the decline. When the Dow Jones Industrial Index fell 190 points, or 6.9%, on October 13, 1989, this decline again elicited strong reactions among people interested in the financial markets. A majority of academicians working in the field of finance asserted that volatility needs to be
calculated as a percentage of change in price or return series
(Schwert, 1990).
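The arithmetic and logarithmic percentage-return definitions referred to above, and compared later in the paper, can be computed directly from a price series. The prices below are purely illustrative.

```python
import numpy as np

# Illustrative price series (not data from the paper).
prices = np.array([100.0, 102.0, 99.0, 101.0])

# Arithmetic (simple) return: (P_t - P_{t-1}) / P_{t-1}
arithmetic = np.diff(prices) / prices[:-1]

# Logarithmic return: ln(P_t / P_{t-1})
logarithmic = np.diff(np.log(prices))

print(arithmetic)
print(logarithmic)
```

Since ln(1 + r) ≈ r for small r, the two series are nearly identical at daily frequency, which is why the choice between them is worth testing rather than assuming.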
One way to examine whether there are any influences on
volatility is to calculate volatility over various frequencies.
Observations based on history indicate that some volatility
clusters are long lived, lasting over 10-year periods, whereas
others are short lived, lasting only a few hours. The main
source of price changes in the market is news about the true value of an asset. If such news arrives in clusters and the data are sampled frequently enough to capture these arrivals, the return series exhibit volatility clustering. The causes of volatility at higher frequencies are likely to be market pressures and irregularities, which are mostly referred to as noise. Macroeconomic and institutional changes are
very likely to be causes of volatility at lower frequencies.
For instance, excess volatility is associated with macroeco-
nomic events in the 1930s. In general, the frequency tells us
about the kind of volatility clustering that will be seen in a
dataset. While low-frequency data allow only low-frequency
or macroeconomic fluctuations, high-frequency data reflect
characteristics of volatility much better (Demetrescu, 2007).
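This frequency effect can be illustrated with a small sketch: realized volatility computed over short windows resolves a short-lived volatility cluster, while long windows smooth it away. The return series here is synthetic, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily returns with a short burst of high volatility in the middle.
daily = rng.standard_normal(252) * 0.01
daily[100:120] *= 4.0   # a short-lived volatility cluster


def realized_vol(returns, window):
    # Square root of the mean squared return in each non-overlapping window.
    n = len(returns) // window
    chunks = returns[: n * window].reshape(n, window)
    return np.sqrt((chunks ** 2).mean(axis=1))


weekly = realized_vol(daily, 5)      # 5-day windows: the burst stands out
quarterly = realized_vol(daily, 63)  # 63-day windows: the burst is averaged away

# The spread between the calmest and most volatile window shrinks
# as the aggregation window grows.
print(weekly.max() / weekly.min(), quarterly.max() / quarterly.min())
```

The max/min ratio for the 5-day windows is far larger than for the 63-day windows, mirroring the point that high-frequency data reflect the characteristics of volatility much better.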
3 LITERATURE REVIEW
Predicting and forecasting volatility has become one of the
most interesting topics in financial markets. It follows from
