Writing Energy Economics Research for Impact.

Author: Dowling, Michael

    Energy economics researchers, like all researchers, are motivated to generate impact from their research. Impact can be defined both broadly in terms of the beneficial contribution of research to society (Penfield et al., 2014), as well as more narrowly in terms of the contribution the research makes to the development of knowledge within a discipline (Li, Liao and Yen, 2013). Our focus in this study is on the latter definition and we measure research impact, similar to Li, Liao and Yen (2013), through citations to a published article. We investigate the non-topic drivers that contribute to these future citations and interpret our findings to advise authors of energy economics articles on effective writing style and article structure.

    Citations to an article generally demonstrate that the research has stimulated theoretical, empirical, or policy discussion in future research. As a result, citations to a researcher's body of work are important in career promotion processes as part of an assessment of research contribution (Reinstein et al., 2011). They also act as a form of intrinsic motivation by showing the researcher they are contributing to the development of knowledge in their field.

    Researchers are, therefore, motivated to produce research that generates citations. Primarily this involves creating contributions that advance knowledge and understanding. But individual articles must also attract the attention of researchers who might build on their ideas. Consider a reasonable peer group of reputable outlets for energy economics research comprising The Energy Journal, Energy Economics, and the Journal of Environmental Economics and Management: these three journals alone published over 2,000 articles in the last five years. In this crowded marketplace of energy economics ideas, new research must attract the attention of the researchers who will cite it when building their own arguments. The working hypothesis of this study is, therefore, that better written and structured energy economics articles are more likely to be noticed, and ultimately cited, in a busy research environment characterized by limited researcher attention.

    Having one's research accepted by journals with a high reputation is one means of generating attention and signaling the importance and relevance of the research. The case in point for this study is energy economics research published in The Energy Journal. Our study sample shows that articles in The Energy Journal generated an average of 11 citations in the five years after publication. The journal is ranked in the first quartile of economics journals in the ISI Journal Citation Report. It is one of the two joint-top field journals for energy and environmental economics from a list of 61 ranked journals in the 'Agricultural, Environmental and Energy Economics' category of the Centre National de la Recherche Scientifique (CNRS) in France. It holds similarly elevated rankings in other national research journal ranking systems. However, not every article published in the journal has an equal impact, a feature in common with all journals. The median number of citations, at 7 per article, is below the average. Thirty percent of articles receive 3 or fewer citations, and the top 10 percent of the most cited articles account for 37 percent of all citations. A sizable minority of articles are therefore not particularly impactful, and a small number of articles deliver an out-sized influence.

    In this study, we systematically analyze articles published in The Energy Journal to show how non-topic factors can influence future citations in energy economics. We show, using regressions of citations on non-topic factors in articles published in The Energy Journal between 1996 and 2013, that about 20 percent of the variation in future citations is related to these measures. Our findings, therefore, cover an important range of factors in determining research impact: writing style and article structure matter. To carry out our study, we build on the scientometrics literature to develop measures of writing and structural choices in research publications. Scientometrics is a research field that involves the analysis of scientific literature, including determining important factors in generating research impact. The initial focus is on the first information that a potential reader sees when considering whether to read an article (title, abstract, topic). A second focus is on the structural choices in framing and writing the article itself (writing style, presentation, and references). The final focus is on author characteristics. While our study has a statistical analysis as its foundation, we present the findings in the form of advice for future writers.

    In designing this study, we are conscious that an important overall finding of the scientometrics literature is that there is no one-size-fits-all approach that works in writing impactful research (Tahamtan, Afshar and Ahamdzadeh, 2016). For example, papers in sociology with short titles receive more citations than papers with long titles, while the opposite is true of medical research (van Wesel, Wyatt and ten Haaf, 2014). There also tend to be particular patterns of conformity that signal belonging to a research group (Walker, 2010), such as structuring research in a certain fashion or citing from an informally agreed set of sources. Thus, what works in broad scientometrics, with its study of very large corpora of articles, does not necessarily work at the individual journal level. Our research, therefore, draws from the broad scientometrics perspective but allows a specific understanding of how writing decisions in energy economics research influence future citations.

    Our study is related to popular guidance on the importance of good writing in economics, most notably by McCloskey (1985, 2019), and in energy science (Weiss and Newman, 2011). Our contribution is the integration of this guidance through a quantitative scientometrics investigation of the relationship between article features and future citations. This is the first study to carry out such a quantitative investigation in energy economics, and the most comprehensive study of its kind in the broader economics field. Doing so results in more qualified guidance based on empirics compared to prior wide-ranging economics writing guides, and richer guidance personalized to energy economics than the scientometric studies. Our methodological approach is quite close to Dowling, Hammami and Zreik (2018), who analyze the impact of article features on citations for Economics Letters articles, but we significantly expand on that study, which just examined three of the 19 article features that this study examines. As the ultimate aim of this study is to highlight the features of writing and structure that influence impact, we write up our results in the form of a writing guide, integrating the relevant findings in the justification for the advice. The next section describes the data and testing approach, and the following section presents the findings and guidance.


    Using Scopus, we identify all articles published in The Energy Journal from 1996 to 2013. Information in Scopus was significantly incomplete before 1996, and 2013 is the latest possible publication year because our main dependent variable counts citations up to five years after publication (to the end of 2018). We only include documents of type 'articles,' thus excluding other document types that the journal occasionally publishes such as 'reviews' and 'editorials.' Excluding also articles with incomplete information for at least one important variable, we are left with a final sample of 504 articles.

    The main dependent variable (DV), 5-year Citations, includes all Scopus citations to an article, excluding self-citations, in the first five years following publication. On average, there are 10.6 citations per article, with a median of seven citations. There is also a notable skew in the citation distribution, with the top 10% of articles contributing 37% of total citations. We need to account for this skewed distribution in our testing, so we calculate the DV as the inverse hyperbolic sine (asinh) of citations (Card and DellaVigna, 2017; Dowling, Hammami and Zreik, 2018). We also construct some additional DVs. To test shorter time-period citations, we use 3-year Citations, and for a longer time period, we construct 10-year Citations. We also test, using a dummy variable construction, whether the top 25 percent of most cited articles have particular features that are related to citation success.
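For reference, the inverse hyperbolic sine behaves like a log transform for large counts but, unlike the log, is defined at zero, which is why it suits skewed citation counts that include uncited articles:

```latex
\operatorname{asinh}(c) = \ln\!\left(c + \sqrt{c^{2} + 1}\right),
\qquad \operatorname{asinh}(0) = 0,
\qquad \operatorname{asinh}(c) \approx \ln(2c) \ \text{for large } c.
```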

    The testing approach is OLS regressions for the asinh-transformed citation-count DVs and probit regressions for dummy DVs coded 1 if an article is among the top 25 percent most-cited articles and 0 otherwise. All independent variables (IVs), described below, are included in the regressions. As the range of IVs is quite large, we apply general-to-specific (GETS) modeling (Campos, Ericsson and Hendry, 2005; Hansen, 1996) to arrive at a more parsimonious model, implemented through the Stata genspec package of Clarke (2014). Lastly, we calculate the elasticities of significant independent variables to determine practical importance.
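As a rough sketch of the core estimation step (not the authors' actual code: the feature names and data below are invented purely for illustration), the asinh transform and OLS fit could look like:

```python
import numpy as np

# Hypothetical toy data: simulated citation counts and two invented
# non-topic features (title length, reference count).
rng = np.random.default_rng(0)
n = 200
title_len = rng.integers(4, 16, size=n)    # hypothetical: words in title
n_refs = rng.integers(10, 60, size=n)      # hypothetical: references cited
citations = rng.poisson(lam=0.3 * n_refs)  # skewed, zero-inflated counts

# Inverse hyperbolic sine of the DV: log-like for large counts,
# but defined at zero (uncited articles stay in the sample).
y = np.arcsinh(citations)

# OLS by least squares: y = b0 + b1*title_len + b2*n_refs + error
X = np.column_stack([np.ones(n), title_len, n_refs])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```

In practice the paper's specification includes all 19 article features as IVs and then prunes them via GETS model selection; the sketch above only shows why the asinh transform lets zero-citation articles remain in a log-like regression.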

    Our IVs are generally measures of the non-topic features of an article, except for one...
