Efficient selection of hyperparameters in large Bayesian VARs using automatic differentiation

Published: 01 September 2020
Received: 10 September 2019; Accepted: 12 January 2020
DOI: 10.1002/for.2660
Journal of Forecasting. 2020;39:934–943. © 2020 John Wiley & Sons, Ltd.
RESEARCH ARTICLE
Joshua C. C. Chan1,2, Liana Jacobi3, Dan Zhu4

1 Department of Economics, Purdue University, West Lafayette, Indiana
2 Economics Discipline Group, University of Technology Sydney, Sydney, NSW, Australia
3 Department of Economics, University of Melbourne, Australia
4 Department of Business Statistics and Econometrics, Monash University, Clayton, Victoria, Australia

Correspondence
Dan Zhu, Department of Business Statistics and Econometrics, Monash University, Clayton, 3168 Victoria, Australia.
Email: dan.zhu@monash.edu
Abstract
Large Bayesian vector autoregressions with the natural conjugate prior are now routinely used for forecasting and structural analysis. It has been shown that selecting the prior hyperparameters in a data-driven manner can often substantially improve forecast performance. We propose a computationally efficient method to obtain the optimal hyperparameters based on automatic differentiation, which is an efficient way to compute derivatives. Using a large US data set, we show that using the optimal hyperparameter values leads to substantially better forecast performance. Moreover, the proposed method is much faster than the conventional grid-search approach, and is applicable in high-dimensional optimization problems. The new method thus provides a practical and systematic way to develop better shrinkage priors for forecasting in a data-rich environment.

KEYWORDS
automatic differentiation, forecasts, marginal likelihood, optimal hyperparameters, vector autoregression
1 INTRODUCTION

Since the seminal paper of Banbura, Giannone, and Reichlin (2010) showed that it is feasible to estimate large Bayesian vector autoregressions (BVARs) with over 100 variables, there has been a lot of interest in using large BVARs for forecasting and structural analysis. A few prominent examples include Carriero, Kapetanios, and Marcellino (2009), Koop (2013), Koop and Korobilis (2013), and Banbura et al. (2013). One key aspect of these large BVARs is the use of shrinkage priors that formally incorporate sensible nondata information, and one popular way to do so is the Minnesota-type natural conjugate prior that gives rise to a range of analytical results, including closed-form expressions of the marginal likelihood.1 These analytical results are later used in Carriero, Clark, and Marcellino (2016) and Chan (2020) to develop efficient sampling algorithms to estimate large BVARs with flexible error covariance structures, such as stochastic volatility, serially correlated and non-Gaussian errors.

1 Early seminal works on shrinkage priors were developed by Doan, Litterman, and Sims (1984) and Litterman (1986). Similar shrinkage priors for structural VARs are formulated in Leeper et al. (1996) and Sims and Zha (1998).
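To make the reference to "closed-form expressions of the marginal likelihood" concrete, the following is a sketch of the standard natural conjugate setup and its marginal likelihood, written in generic notation that is assumed here rather than taken from the paper (Y is the T x n matrix of observations, X the matrix of intercepts and lags, A the VAR coefficients, and A_0, V_A, S_0, nu_0 the prior hyperparameters):

\begin{align*}
Y &= XA + U, \qquad \operatorname{vec}(U) \sim \mathcal{N}(0, \Sigma \otimes I_T),\\
\operatorname{vec}(A)\mid\Sigma &\sim \mathcal{N}\bigl(\operatorname{vec}(A_0),\, \Sigma \otimes V_A\bigr), \qquad \Sigma \sim \mathcal{IW}(\nu_0, S_0),\\
K_A &= V_A^{-1} + X'X, \qquad \widehat{A} = K_A^{-1}\bigl(V_A^{-1}A_0 + X'Y\bigr),\\
\widehat{S} &= S_0 + A_0'V_A^{-1}A_0 + Y'Y - \widehat{A}'K_A\widehat{A},\\
p(Y) &= \pi^{-\frac{Tn}{2}}\,
\frac{\Gamma_n\!\bigl(\frac{\nu_0+T}{2}\bigr)}{\Gamma_n\!\bigl(\frac{\nu_0}{2}\bigr)}\,
|V_A|^{-\frac{n}{2}}\,|K_A|^{-\frac{n}{2}}\,
|S_0|^{\frac{\nu_0}{2}}\,|\widehat{S}|^{-\frac{\nu_0+T}{2}}.
\end{align*}

Because p(Y) depends on the prior only through A_0, V_A, S_0, and nu_0, it can be evaluated, and differentiated, directly as a function of those hyperparameters.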
The natural conjugate prior depends on a few hyperparameters that control the degree of shrinkage, and they are typically fixed at some subjectively chosen values. Alternatively, a data-based approach to select these hyperparameters might be more appealing as it reduces the number of important subjective choices required from the user. For example, Del Negro and Schorfheide (2004), Schorfheide and Song (2015), and Carriero et al. (2015) obtain the optimal hyperparameters by maximizing the marginal likelihood over a grid of possible values.2 This

2 Giannone, Lenza, and Primiceri (2015) show that a data-based approach of selecting the hyperparameters, compared to the conventional method of fixing them to some ad hoc values, can substantially improve the forecast performance of large BVARs.
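To illustrate the contrast with grid search, the sketch below treats a log marginal likelihood as a differentiable function of its shrinkage hyperparameter and maximizes it by gradient ascent, with the gradient supplied by automatic differentiation. This is a minimal illustration, not the authors' code: it uses JAX, and the objective log_ml is a deliberately simplified stand-in (a Bayesian ridge regression with a single shrinkage parameter kappa) rather than the closed-form BVAR marginal likelihood; the data, step size, and iteration count are likewise illustrative.

import jax
import jax.numpy as jnp

def log_ml(log_kappa, y, x):
    # Toy stand-in for a closed-form log marginal likelihood:
    # y = x @ beta + e, e ~ N(0, I), prior beta ~ N(0, kappa * I),
    # so kappa plays the role of an overall shrinkage hyperparameter.
    kappa = jnp.exp(log_kappa)                   # optimize on the log scale
    k = x.shape[1]
    prec_post = x.T @ x + jnp.eye(k) / kappa     # posterior precision of beta
    v_post = jnp.linalg.inv(prec_post)
    b_post = v_post @ (x.T @ y)                  # posterior mean of beta
    # log p(y | kappa), up to an additive constant
    return (-0.5 * k * jnp.log(kappa)
            + 0.5 * jnp.linalg.slogdet(v_post)[1]
            - 0.5 * (y @ y - b_post @ prec_post @ b_post))

# Automatic differentiation gives the exact gradient of the objective.
grad_log_ml = jax.jit(jax.grad(log_ml, argnums=0))

# Simulated data, for illustration only.
key_x, key_e = jax.random.split(jax.random.PRNGKey(0))
x = jax.random.normal(key_x, (200, 5))
y = x @ jnp.array([1.0, 0.5, 0.0, 0.0, 0.0]) + 0.1 * jax.random.normal(key_e, (200,))

# Plain gradient ascent on log(kappa), instead of evaluating a grid of values.
log_kappa = jnp.array(0.0)
for _ in range(200):
    log_kappa = log_kappa + 0.05 * grad_log_ml(log_kappa, y, x)

print("marginal-likelihood-maximizing kappa:", float(jnp.exp(log_kappa)))

Unlike a grid, whose cost grows exponentially with the number of hyperparameters, a gradient-based update of this kind extends directly to several hyperparameters at once, which is the high-dimensional setting the abstract refers to.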