Is There Really Granger Causality between Energy Use and Output?

Author: Bruns, Stephan B.
  1. INTRODUCTION

    The literature on Granger causality between energy and economic output consists of hundreds of papers. But despite attempts to review and organize this literature (e.g. Ozturk, 2010; Payne, 2010a), the nature of the relationship between the variables remains unclear (Stern, 2011). In this paper, we carry out a meta-analysis of this literature. Our goal is to determine whether there is a genuine causal relation between energy use and output or whether the large number of apparently significant results is due to publication or misspecification bias. It is important to understand these relationships because of the general role of energy in economic production and growth (Stern, 2011), the ongoing debate about the effect of energy price shocks on the economy (Hamilton, 2009), and the important role of energy in climate change policy.

    Meta-analysis is a method for aggregating the results of many individual empirical studies in order to increase statistical power and remove confounding effects (Stanley, 2001). Simple averaging of coefficients or test statistics across studies is, however, plagued by the effects of publication and misspecification biases. Publication bias is the tendency of authors and journals to preferentially publish statistically significant or theory-conforming results (Card and Krueger, 1995). In the worst-case scenario, there may be no real effect in the data and yet studies that find statistically significant results are published. This has led a prominent meta-analyst to claim that "most published research findings are false" (Ioannidis, 2005). Granger causality techniques have been widely applied in many areas of economics, including monetary policy (Lee and Yang, 2012), finance and economic development (Ang, 2008a), and energy economics (Ozturk, 2010), as well as in other fields such as climate change (e.g. Kaufmann and Stern, 1997) and neuroscience (Bressler and Seth, 2011). But the results of Granger causality testing are frequently fragile and unstable across specifications (Lee and Yang, 2012; Ozturk, 2010; Stern, 2011; Payne, 2010b). In this paper, we show how meta-analysis can be used to test for genuine effects, publication bias, and misspecification bias in Granger causality studies. The methods we use should be applicable to other areas of research that use Granger causality testing, and possibly to the meta-analysis of studies using other econometric methods.

    We modify the standard FAT-PET meta-regression model used in economics (Stanley and Doucouliagos, 2012) to meta-analyze Granger causality test statistics. The FAT-PET model regresses the t-statistics from individual studies on the inverse of the standard errors of the regression coefficients of each study. If there is a genuine effect in the literature, that is, a non-zero regression parameter, the coefficient of the inverse of the standard error will be non-zero, as the t-statistics increase as the standard error declines in larger samples. This is the precision-effect test (PET). The intercept term is used to test for the presence of publication bias, the so-called funnel asymmetry test (FAT). Granger causality tests present three challenges to using the standard FAT-PET model. The first is that the usual restriction test statistics have an F or chi-squared distribution. These must be converted to statistics with a common distribution whose properties are suitable for regression analysis. We transform the p-values of the original test statistics to standard normal variates using the probit transformation. (1) The standard normal distribution is also better suited to meta-regression analysis than the commonly used t-distribution because it is unaffected by the degrees of freedom. The second challenge is that these test statistics do not have associated standard errors. Therefore, our meta-regression model replaces the inverse of the standard error with the square root of the degrees of freedom of the regressions in the underlying studies. The third challenge is the tendency for researchers to over-fit vector autoregression (VAR) models in small samples (Gonzalo and Pitarakis, 2002). These over-fitted models tend to over-reject the null hypothesis of Granger non-causality when it is true, especially in small samples (Zapata and Rambaldi, 1997). We control for these effects by including the number of degrees of freedom lost in fitting the underlying models as a control variable.
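
    To make the transformation step concrete, the following is a minimal sketch in Python (our illustration, not code from any of the underlying studies), assuming SciPy is available; the function names and the sign convention, which maps smaller p-values to larger positive normal variates, are our own choices.

      # Minimal sketch: convert Granger causality test statistics (F or
      # chi-squared) to standard normal variates via the probit transformation.
      from scipy import stats

      def probit_from_f(f_stat, df_num, df_den):
          """Probit-transform the p-value of an F-statistic."""
          p = stats.f.sf(f_stat, df_num, df_den)  # upper-tail p-value of the F test
          return stats.norm.ppf(1.0 - p)          # inverse normal CDF (probit)

      def probit_from_chi2(chi2_stat, df):
          """Probit-transform the p-value of a chi-squared (Wald) statistic."""
          p = stats.chi2.sf(chi2_stat, df)
          return stats.norm.ppf(1.0 - p)

      # Example: F(2, 40) = 3.5 gives p of about 0.04, i.e. z of about 1.75
      print(probit_from_f(3.5, 2, 40))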

    A recent exploratory meta-analysis of 174 pairs of tests (each pair tests whether energy causes output and vice versa) from 39 studies uses a multinomial logit model to test the effects of sample characteristics and methods on the probability of finding Granger causality between energy and output in each direction (Chen et al., 2012). Chen et al. (2012) conclude that researchers are more likely to find that output causes energy in developing countries and that energy causes output in OPEC and Kyoto Annex 1 countries. Output is also more likely to cause energy in larger countries and in studies with more recent data, while higher total energy use makes a finding that energy causes output more likely. They also find that the standard Granger causality test is more likely to find causality in some direction than are alternative methods. Though these findings are interesting, Chen et al. (2012) do not address whether the causality tests represent a sample of valid statistical tests or are possibly spurious outcomes of publication and misspecification bias. In this paper, we test whether there are genuine effects in this literature rather than just misspecification and publication selection biases. Additionally, we use a larger sample of 574 pairs of causality tests from 72 studies selected from this vast literature of more than 500 papers, using clearly defined and documented selection criteria.

    The first part of our paper outlines our model for testing for genuine effects and for publication and misspecification biases in Granger causality literatures. We then describe the choice of studies for our meta-analysis, followed by an exploratory analysis of the data, including a description of the data, a correlation analysis, and a basic meta-regression analysis. This analysis finds no genuine effect in the meta-sample as a whole but also shows the likelihood of severe misspecification biases. We then apply models that control for these misspecification biases to the data as a whole and, using dummy variables, to various subsets of the literature. We again find no genuine effect in the literature as a whole, but models that include energy prices as a control variable show a genuine effect from output to energy use. Other effects are more fragile or ambiguous. The final section provides suggestions and recommendations for future research.

  2. METHODS

    2.1 Testing for Genuine Effects

    In the absence of publication and misspecification biases, and abstracting from genuine heterogeneity, the estimated effect size, $\hat{\beta}$ (in econometrics, typically a regression coefficient of interest), should have the same expected value across different studies irrespective of their degrees of freedom, $DF$. The precision, $1/SE(\hat{\beta})$, of a consistent estimator of the effect size tends to increase linearly with the square root of the degrees of freedom, as the parameter estimate converges in probability to the true value. Therefore, assuming for simplicity that the null hypothesis is $\beta = 0$, if there is a genuine non-zero effect, the absolute value of the related t-statistic should increase linearly with the square root of the degrees of freedom:

    $t_i = \alpha \sqrt{DF_i} + u_i$ (1)

    where $i$ indexes individual test statistics (2) and $\alpha$ has the same sign as the underlying effect, $\beta$. The errors, $u_i$, are predictably heteroskedastic, as the variance of the t-distribution increases as the degrees of freedom decreases for low numbers of degrees of freedom. Card and Krueger (1995) and Stanley (2005a) suggest estimating a logarithmic version of (1), which Stanley calls meta-significance testing (MST):

    $\ln |t_i| = \ln a_0 + a_1 \ln DF_i + \epsilon_i$ (2)

    Rejecting the null hypothesis that $a_1 = 0$ suggests that there is a genuine effect in the meta-sample. However, this functional form is undesirable. First, the heteroskedasticity of the t-distribution may introduce an undesirable negative correlation between the dependent variable and the degrees of freedom for low degrees of freedom. Second, because of taking absolute values and logarithms, the error term will not have a symmetric distribution, and it will also be heteroskedastic if there is a genuine effect. Finally, though Stanley (2008) found (2) to be very powerful in large meta-samples even in the presence of publication biases, the test suffers from inflated type I errors (Stanley, 2008; Stanley and Doucouliagos, 2012).
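
    To illustrate how (1) and (2) relate, the following minimal simulation sketch in Python (hypothetical data, not our meta-sample) generates t-statistics under a genuine effect and fits the MST regression, assuming NumPy and statsmodels. Under (1), a genuine effect implies a slope $a_1$ near 0.5, while no effect implies $a_1$ near zero.

      # Minimal sketch: simulate t-statistics under equation (1) and fit the
      # MST regression (2). All data here are hypothetical.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      df = rng.integers(20, 200, size=200).astype(float)  # degrees of freedom per test
      alpha = 0.5                                         # genuine underlying effect
      t = alpha * np.sqrt(df) + rng.standard_normal(200)  # equation (1)

      X = sm.add_constant(np.log(df))
      mst = sm.OLS(np.log(np.abs(t)), X).fit()
      print(mst.params)   # slope near 0.5 under a genuine effect; near 0 if alpha = 0
      print(mst.pvalues)  # p-value for the test of a1 = 0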

    If all results are equally likely to be accepted for publication, there should be no relation between the estimated effect size and its standard error. However, if journals only publish, or authors only submit for publication, statistically significant results, then the lower the precision of estimation, the larger reported effect sizes must be in order to achieve a given p-value and be published. This suggests a second meta-regression model:

    $\hat{\beta}_i = \gamma_0 + \gamma_1 SE_i + e_i$ (3)

    The test of $\gamma_1 = 0$, which Stanley (2005a) calls the funnel asymmetry test (FAT), is a test for publication bias, while $\gamma_0$ is an estimate of the value of the genuine effect adjusted for the publication bias. This relationship is exact when the genuine effect is zero (Stanley and Doucouliagos, 2011) and, therefore, is a suitable model for testing...
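
    For concreteness, a minimal sketch of estimating (3) on simulated, hypothetical data follows, again assuming NumPy and statsmodels; weighting by the inverse error variance, as here, is equivalent to dividing (3) through by $SE_i$ and regressing the t-statistic on a constant and $1/SE_i$.

      # Minimal sketch: FAT-PET regression (3) on hypothetical data. gamma_1
      # (coefficient on SE) tests funnel asymmetry, i.e. publication bias;
      # gamma_0 (intercept) estimates the bias-adjusted genuine effect.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      se = rng.uniform(0.05, 0.5, size=200)  # standard errors per study
      beta_hat = 0.3 + 0.8 * se + se * rng.standard_normal(200)  # selection inflates imprecise effects

      X = sm.add_constant(se)
      fat_pet = sm.WLS(beta_hat, X, weights=1.0 / se**2).fit()
      print(fat_pet.params)   # [gamma_0 (PET estimate), gamma_1 (FAT)]
      print(fat_pet.pvalues)  # tests of gamma_0 = 0 (PET) and gamma_1 = 0 (FAT)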
