Journal of Forecasting
Publication dates:
- Nbr. 38-6, September 2019
- Nbr. 38-5, August 2019
- Nbr. 38-4, July 2019
- Nbr. 38-3, April 2019
- Nbr. 38-2, March 2019
- Nbr. 38-1, January 2019
- Nbr. 37-8, December 2018
- Nbr. 37-7, November 2018
- Nbr. 37-6, September 2018
- Nbr. 37-5, August 2018
- Nbr. 37-4, July 2018
- Nbr. 37-3, April 2018
- Nbr. 37-2, March 2018
- Nbr. 37-1, January 2018
- Nbr. 36-8, December 2017
- Nbr. 36-7, November 2017
- Nbr. 36-6, September 2017
- Nbr. 36-5, August 2017
- Nbr. 36-4, July 2017
- Nbr. 36-3, April 2017
- A note on the predictive power of survey data in nowcasting euro area GDP
This paper investigates the trade‐off between timeliness and quality in nowcasting practices. This trade‐off arises when the variable to be nowcast, such as gross domestic product (GDP), is quarterly, while the underlying panel data are monthly and contain both survey and macroeconomic data. These two categories of data have different properties regarding timeliness and quality: survey data are released promptly (but may possess less predictive power), whereas macroeconomic data possess more predictive power (but become available only with publication lags). In our empirical analysis, we use a modified dynamic factor model that extends the standard dynamic factor model of Stock and Watson (Journal of Business and Economic Statistics, 2002, 20, 147–162) in three ways: mixed frequencies, preselection, and cointegration among the economic variables. Our main finding from a historical nowcasting simulation based on euro area GDP is that the predictive power of the survey data depends on the economic circumstances: survey data are more useful in tranquil times, and less so in times of turmoil.
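The mixed-frequency problem the abstract describes can be illustrated with a much simpler device than the paper's factor model: a bridge equation that aggregates a monthly indicator to quarterly frequency and regresses GDP growth on it. The sketch below uses synthetic data and ordinary least squares; it is only a minimal illustration of the monthly-to-quarterly mapping, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly survey indicator: 40 quarters * 3 months (illustrative data)
n_q = 40
monthly = rng.normal(size=n_q * 3)

# Quarterly aggregation: average the three months of each quarter
quarterly_x = monthly.reshape(n_q, 3).mean(axis=1)

# Synthetic quarterly GDP growth linked to the indicator
gdp = 0.5 + 0.8 * quarterly_x + rng.normal(scale=0.2, size=n_q)

# Bridge-equation nowcast: OLS of GDP growth on the aggregated indicator,
# estimated on the first 39 quarters, then applied to the latest quarter
X = np.column_stack([np.ones(n_q - 1), quarterly_x[:-1]])
beta, *_ = np.linalg.lstsq(X, gdp[:-1], rcond=None)
nowcast = beta[0] + beta[1] * quarterly_x[-1]
print(round(nowcast, 3))
```

In practice the monthly panel has a ragged edge (different release lags per series), which is exactly why the paper's factor model treats survey and macroeconomic data differently.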
- Oil financialization and volatility forecast: Evidence from multidimensional predictors
Using the generalized dynamic factor model, this study constructs three predictors of crude oil price volatility: a fundamental (physical) predictor, a financial predictor, and a macroeconomic uncertainty predictor. In addition, an event‐triggered predictor is constructed from data extracted from Google Trends. We build GARCH‐MIDAS (generalized autoregressive conditional heteroskedasticity–mixed‐data sampling) models that combine realized volatility with the predictors to forecast oil price volatility at different horizons, and we assess the predictive power of realized volatility and the predictors with the model confidence set (MCS) test. The findings show that, among the four indexes, the financial predictor has the most predictive power for crude oil volatility, providing strong evidence that financialization has been the key determinant of crude oil price behavior since the 2008 global financial crisis. In addition, the fundamental predictor, followed by the financial predictor, effectively forecasts crude oil price volatility at long forecasting horizons. Our findings indicate that the different predictors can provide distinct predictive information at different horizons given the specific market situation. These findings have useful implications for market traders in terms of managing crude oil price risk.
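The mechanical core of a GARCH-MIDAS model is its long-run variance component: a weighted sum of past low-frequency measures (here, realized variances) with weights from a beta lag polynomial. The sketch below shows the common one-parameter beta weighting scheme with assumed parameter values (`theta`, `slope`, `m` are illustrative, not estimates from the paper).

```python
import numpy as np

def beta_weights(n_lags, theta=5.0):
    """One-parameter beta lag polynomial used in MIDAS-type models:
    w_k proportional to (1 - k/(K+1))**(theta - 1), normalized to sum to 1.
    For theta > 1 the weights decline with the lag."""
    k = np.arange(1, n_lags + 1)
    raw = (1.0 - k / (n_lags + 1.0)) ** (theta - 1.0)
    return raw / raw.sum()

def long_run_component(rv_history, m=0.0, slope=0.3, theta=5.0):
    """Long-run variance: intercept plus a beta-weighted sum of past
    realized variances (most recent lag first). Parameter values here
    are purely illustrative."""
    w = beta_weights(len(rv_history), theta)
    return m + slope * np.dot(w, rv_history)
```

In the full model, total conditional variance is the product of this slowly moving long-run component and a mean-reverting short-run GARCH(1,1) component, which is what lets monthly or quarterly predictors drive daily volatility.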
- WTI crude oil option implied VaR and CVaR: An empirical application
Using option market data, we derive naturally forward‐looking, nonparametric, and model‐free risk estimates, three desirable characteristics that are hard to obtain from historical returns. The option‐implied measures are based only on the first derivative of the option price with respect to the strike price, bypassing the difficult task of estimating the tail of the return distribution. We estimate and backtest the 1%, 2.5%, and 5% WTI crude oil futures option‐implied value at risk (VaR) and conditional value at risk (CVaR) for the turbulent years 2011–2016 and for both tails of the distribution. Compared with risk estimates based on filtered historical simulation, our results show that the option‐implied risk metrics are valid alternatives to statistically based historical models.
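The "first derivative with respect to the strike" step is the Breeden–Litzenberger relation: with zero interest rates, the risk-neutral CDF is F(K) = 1 + dC/dK, so an option-implied VaR is just the strike where that CDF crosses the target quantile. The sketch below is a minimal illustration on synthetic Black–Scholes call prices (not market data), using finite differences across a strike grid.

```python
import numpy as np
from math import log, sqrt, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(s, k, sigma, t):
    """Black-Scholes call price with zero interest rate (synthetic data only)."""
    d1 = (log(s / k) + 0.5 * sigma**2 * t) / (sigma * sqrt(t))
    return s * norm_cdf(d1) - k * norm_cdf(d1 - sigma * sqrt(t))

# Illustrative market: spot 100, 30% vol, 3-month horizon, 5% lower tail
s0, sigma, t, alpha = 100.0, 0.3, 0.25, 0.05
strikes = np.arange(50.0, 150.0, 0.5)
calls = np.array([bs_call(s0, k, sigma, t) for k in strikes])

# Breeden-Litzenberger with r = 0: risk-neutral CDF F(K) = 1 + dC/dK,
# with the derivative taken by central finite differences
cdf = 1.0 + np.gradient(calls, strikes)

# Option-implied 5% VaR level: the price where the implied CDF crosses alpha
var_level = np.interp(alpha, cdf, strikes)
print(round(var_level, 2))
```

The same construction applied to quoted option prices gives the nonparametric, model-free estimate the abstract describes; CVaR follows by averaging the implied quantiles below the VaR level.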
- An ensemble of LSTM neural networks for high‐frequency stock market classification
We propose an ensemble of long short‐term memory (LSTM) neural networks for intraday stock predictions, using a large variety of technical analysis indicators as network inputs. The proposed ensemble operates online, weighting the individual models in proportion to their recent performance, which allows us to deal with possible nonstationarities in an innovative way. Model performance is measured by the area under the receiver operating characteristic curve (AUC). We evaluate the predictive power of our model on several US large‐cap stocks and benchmark it against lasso and ridge logistic classifiers. The proposed model is found to perform better than the benchmark models and equally weighted ensembles.
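Performance-proportional online weighting is simple to state concretely: score each model's recent predictions by AUC and weight it by how far that AUC exceeds the random-classifier level of 0.5. The sketch below is a hedged illustration of that idea (the clipping-at-0.5 rule is an assumption, not the paper's exact scheme), with a rank-based AUC for untied scores.

```python
import numpy as np

def auc(y_true, scores):
    """Rank-based AUC: probability that a randomly chosen positive
    outranks a randomly chosen negative (assumes no tied scores)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = y_true == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def ensemble_weights(recent_aucs, floor=0.5):
    """Weight each model in proportion to how far its recent AUC exceeds
    the random-classifier level; models at or below 0.5 get zero weight."""
    excess = np.clip(np.asarray(recent_aucs, dtype=float) - floor, 0.0, None)
    if excess.sum() == 0:
        return np.full(len(recent_aucs), 1.0 / len(recent_aucs))
    return excess / excess.sum()
```

Recomputing the weights on a rolling window after each prediction batch is what lets the ensemble adapt when the data-generating process drifts.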
- Trading volume and prediction of stock return reversals: Conditioning on investor types' trading
We show that contrasting results on trading volume's predictive role for short‐horizon reversals in stock returns can be reconciled by conditioning on different investor types' trading. Using unique trading data by investor type from Korea, we provide explicit evidence of three distinct mechanisms leading to contrasting outcomes: (i) informed buying—price increases accompanied by high institutional buying volume are less likely to reverse; (ii) liquidity selling—price declines accompanied by high institutional selling volume in institutional investor habitat are more likely to reverse; (iii) attention‐driven speculative buying—price increases accompanied by high individual buying volume in individual investor habitat are more likely to reverse. Our approach to predicting which mechanism will prevail improves reversal forecasts following return shocks: an augmented contrarian strategy utilizing our ex ante formulation increases short‐horizon reversal strategy profitability by 40–70% in the US and Korean stock markets.
- Forecasting economic indicators using a consumer sentiment index: Survey‐based versus text‐based data
Given the confirmed effectiveness of the survey‐based consumer sentiment index (CSI) as a leading indicator of real economic conditions, the CSI is actively used in policy judgments and decisions in many countries. However, although the CSI offers qualitative information on current conditions and on households' expected future economic activity, the survey‐based method has several limitations. In this context, we extract sentiment information from online economic news articles and show that the Korean case is a good illustration of applying text mining to generate a CSI via sentiment analysis. By applying a simple lexicon‐based sentiment analysis, this paper confirms that news articles can be an effective source for generating an economic indicator in Korea. Although cross‐national comparative results would be better suited than national‐level data for generalizing and verifying the method used in this study, such international comparisons are challenging because of the linguistic preprocessing required. We hope to encourage further cross‐national comparative research applying the approach proposed in this study.
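A lexicon-based sentiment score of the kind described is mechanically simple: count matches against positive and negative word lists and normalize. The sketch below uses tiny illustrative English word sets as stand-ins for a real economic sentiment lexicon (the paper works with Korean text and a proper lexicon).

```python
# Minimal lexicon-based sentiment scoring; the word lists below are
# illustrative stand-ins, not a real economic sentiment lexicon.
POSITIVE = {"growth", "recovery", "surplus", "gain", "expansion"}
NEGATIVE = {"recession", "deficit", "loss", "decline", "unemployment"}

def sentiment_score(text):
    """(#positive - #negative) / (#positive + #negative); 0 if no lexicon hits."""
    tokens = text.lower().split()
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)
```

Averaging such scores over all articles published in a month yields a monthly text-based index that can be compared against the survey-based CSI.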
- Issue Information
No abstract is available for this article.
- Information content of DSGE forecasts
This paper examines whether forecasts from dynamic stochastic general equilibrium (DSGE) models contain information beyond that in the lagged values used extensively within the models. Four sets of forecasts are examined. The results are encouraging for DSGE forecasts of real GDP: there is information in the DSGE forecasts not contained in forecasts based only on lagged values, and no information in the lagged‐value forecasts not contained in the DSGE forecasts. The opposite is true for forecasts of the GDP deflator.
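Questions of this form are typically posed as a forecast-encompassing regression: regress the outcome on both competing forecasts, y_t = a + b1·f1_t + b2·f2_t + e_t, and check whether b2 is zero (f1 encompasses f2). The sketch below runs that regression on synthetic data where the second forecast is, by construction, the first plus noise; it illustrates the test's logic, not the paper's data or exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Synthetic target and two competing forecasts: f1 tracks the signal,
# f2 is f1 plus pure noise, so f1 should encompass f2
signal = rng.normal(size=n)
y = signal + rng.normal(scale=0.5, size=n)
f1 = signal
f2 = signal + rng.normal(scale=1.0, size=n)

# Encompassing regression: y on a constant and both forecasts
X = np.column_stack([np.ones(n), f1, f2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[2] near zero means f2 adds no information beyond f1
print(np.round(beta, 3))
```

In applied work one would also compute a standard error for `beta[2]` and test b2 = 0 formally rather than eyeballing the point estimate.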
- Predictive power of Markovian models: Evidence from US recession forecasting
This paper extends the application of Markovian models to predicting US recessions. The proposed Markovian models, including hidden Markov and Markov models, incorporate the temporal autocorrelation of binary recession indicators in a traditional but natural way. Considering interest rates and spreads, stock prices, monetary aggregates, and output as candidate predictors, we examine the out‐of‐sample performance of the Markovian models in predicting recessions 1–12 months ahead, through rolling‐window experiments as well as experiments based on a fixed full training set. Our study shows that the Markovian models are superior to probit models in detecting recessions and capturing recession duration. However, the rolling‐window method can sometimes reduce the models' prediction reliability, as it may incorporate the economy's unsystematic adjustments and erratic shocks into the forecast. In addition, interest rate spreads and output are the most effective predictors for explaining business cycles.
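The core multi-step forecast in a two-state Markov setup comes from powering the transition matrix: the h-month-ahead recession probability is the (current state, recession) entry of P^h. The sketch below shows that calculation with illustrative transition probabilities (not estimates from the paper, and omitting the hidden-state and predictor layers).

```python
import numpy as np

# Two-state chain over (expansion, recession); the monthly transition
# probabilities below are illustrative, not estimated values
P = np.array([[0.98, 0.02],
              [0.10, 0.90]])

def recession_prob(current_state, horizon):
    """P(recession in `horizon` months | current state): the
    (current_state, recession) entry of the matrix power P**horizon."""
    Ph = np.linalg.matrix_power(P, horizon)
    return Ph[current_state, 1]
```

As the horizon grows, the forecast converges to the chain's stationary recession probability, 0.02 / (0.02 + 0.10) = 1/6 for these illustrative numbers, which is why long-horizon Markov forecasts are driven mainly by the unconditional recession frequency.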
- The total cost of misclassification in credit scoring: A comparison of generalized linear models and generalized additive models
This study examines whether the evaluation of a bankruptcy prediction model should take into account the total cost of misclassification. For this purpose, we introduce and apply a credit‐scoring validity measure based on the total cost of misclassification. Specifically, we use comprehensive data from the annual financial statements of a sample of German companies and compare a generalized linear model and a generalized additive model with regard to their ability to predict a company's probability of default. On the basis of these data, the validity measure we introduce shows that, compared with generalized linear models, generalized additive models can substantially reduce misclassification and the total cost it entails. The validity measure is informative and supports the argument that generalized additive models should be preferred, even though they are more complex than generalized linear models. We conclude that balancing a model's validity and complexity requires taking the total cost of misclassification into account.
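The total-cost idea can be made concrete with asymmetric error costs: a missed default (false negative) is typically far costlier to a lender than a rejected good borrower (false positive), so the classification cutoff should minimize the weighted error count rather than the raw error rate. The sketch below is a minimal illustration with an assumed 10:1 cost ratio, not the paper's validity measure.

```python
import numpy as np

def total_misclassification_cost(y_true, pd_hat, threshold, c_fn=10.0, c_fp=1.0):
    """Total cost at a given probability-of-default cutoff. The 10:1
    false-negative/false-positive cost ratio is purely illustrative."""
    pred_default = pd_hat >= threshold
    fn = np.sum((y_true == 1) & ~pred_default)   # missed defaults
    fp = np.sum((y_true == 0) & pred_default)    # rejected good borrowers
    return c_fn * fn + c_fp * fp

def best_threshold(y_true, pd_hat, grid=np.linspace(0.05, 0.95, 19)):
    """Cutoff on the grid that minimizes total misclassification cost."""
    costs = [total_misclassification_cost(y_true, pd_hat, t) for t in grid]
    return grid[int(np.argmin(costs))]
```

Comparing two models by their minimized total cost, rather than by accuracy or AUC alone, is the spirit of the cost-based validity comparison the abstract describes.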
- Realized Volatility Forecast of Stock Index Under Structural Breaks
We investigate realized volatility forecasts of stock indices under structural breaks. We utilize a pure multiple‐mean‐break model to identify possible structural breaks in the daily realized volatility series, employing the intraday high‐frequency data of the Shanghai Stock...
- Forecasting with a DSGE Model of a Small Open Economy within the Monetary Union
In this paper we lay out a two‐region dynamic stochastic general equilibrium (DSGE) model of an open economy within the European Monetary Union. The model, which is built in the New Keynesian tradition, contains real and nominal rigidities such as habit formation in consumption, price and...
- A Time‐Simultaneous Prediction Box for a Multivariate Time Series
A sample‐based method in Kolsrud (Journal of Forecasting 2007; 26(3): 171–188) for the construction of a time‐simultaneous prediction band for a univariate time series is extended to produce a variable‐ and time‐simultaneous prediction box for a multivariate time series. A measure of distance based ...
- How Informative are the Subjective Density Forecasts of Macroeconomists?
In this paper, we propose a framework to evaluate the subjective density forecasts of macroeconomists using micro data from the euro area Survey of Professional Forecasters (SPF). A key aspect of our analysis is the use of evaluation measures which take account of the entire predictive densities,...
- Multivariate Forecasting with BVARs and DSGE Models
In this paper I assess the ability of Bayesian vector autoregressions (BVARs) and dynamic stochastic general equilibrium (DSGE) models of different size to forecast comovements of major macroeconomic series in the euro area. Both approaches are compared to unrestricted VARs in terms of multivariate ...
- Forecasting High‐Frequency Risk Measures
This article proposes intraday high‐frequency risk (HFR) measures for market risk in the case of irregularly spaced high‐frequency data. In this context, we distinguish three concepts of value‐at‐risk (VaR): the total VaR, the marginal (or per‐time‐unit) VaR and the instantaneous VaR. Since the...
- Estimating the Out‐of‐Sample Predictive Ability of Trading Rules: A Robust Bootstrap Approach
In this paper, we provide a novel way to estimate the out‐of‐sample predictive ability of a trading rule. Usually, this ability is estimated using a sample‐splitting scheme, true out‐of‐sample data being rarely available. We argue that this method makes poor use of the available data and creates...
- Out‐of‐sample equity premium prediction: A scenario analysis approach
We propose two methods of equity premium prediction with single and multiple predictors respectively and evaluate their out‐of‐sample performance using US stock data with 15 popular predictors for equity premium prediction. The first method defines three scenarios in terms of the expected returns...
- Predicting Stock Return Volatility: Can We Benefit from Regression Models for Return Intervals?
We study the performance of recently developed linear regression models for interval data when it comes to forecasting the uncertainty surrounding future stock returns. These interval data models use easy‐to‐compute daily return intervals during the modeling, estimation and forecasting stage. They...
- Measuring the market risk of freight rates: A forecast combination approach
This paper addresses the issue of freight rate risk measurement via value at risk (VaR) and forecast combination methodologies while focusing on detailed performance evaluation. We contribute to the literature in three ways: First, we reevaluate the performance of popular VaR estimation methods on...