Reproducibility in Accounting Research: Views of the Research Community

DOI: 10.1111/1475-679X.12305
Journal of Accounting Research
Vol. 58 No. 2 May 2020
Printed in U.S.A.
LUZI HAIL,
MARK LANG,
AND CHRISTIAN LEUZ
ABSTRACT
We have little knowledge about the prevalence of irreproducibility in the ac-
counting literature. To narrow this gap, we conducted a survey among the
participants of the 2019 JAR Conference on their perceptions of the fre-
quency, causes, and consequences of irreproducible research published in
accounting journals. A majority of respondents believe that irreproducibility
is common in the literature, constitutes a major problem, and receives too lit-
tle attention. Most have encountered irreproducibility in the work of others
(although not in their own work) but chose not to pursue their failed repro-
duction attempts to publication. Respondents believe irreproducibility results
chiefly from career or publication incentives as well as from selective report-
ing of results. They also believe that practices like sharing code and data com-
bined with stronger incentives to replicate the work of others would enhance
reproducibility. The views of accounting researchers are remarkably similar
to those expressed in a survey by the scientific journal Nature. We conclude
by discussing the implications of our findings and by providing several potential
paths forward for the accounting research community.
The Wharton School, University of Pennsylvania; University of North Carolina, Chapel
Hill; Booth School of Business, University of Chicago, and NBER.
We would like to thank Joachim Gassen for his feedback in developing the survey instru-
ment; Peggy Eppink for administering the survey; Caroline Sprecher for assistance in analyz-
ing the data; Phil Berger, Graeme Dean, Ivo Welch, and Regina Wittenberg-Moerman for their
thoughtful comments on earlier drafts of this paper; and Fabrizio Ferri and the participants
of the 2019 JAR Conference for their insights during and after the panel discussion. We note
that the opinions expressed in this paper reflect our own views and not necessarily those of
the editorships at the Journal of Accounting and Economics (for Lang) or the Journal of Account-
ing Research (for Hail and Leuz). An online appendix to this paper can be downloaded at
http://research.chicagobooth.edu/arc/journal-of-accounting-research/online-supplements.
© University of Chicago on behalf of the Accounting Research Center, 2020
JEL codes: A14; B41; M40; M41
Keywords: accounting research; ethics; expert survey; publication process;
research methodology; replication; reproducibility
The deepest trust in scientific knowledge comes from the ability to replicate empirical
findings directly and independently. (Camerer et al. [2016, p. 1433])
When scientists cannot confirm the results from a published study, to some it is an
indication of a problem, and to others, it is a natural part of the scientific pro-
cess that can lead to new discoveries. (National Academies of Sciences [2019,
p. 1])
Reproducibility is like brushing your teeth. [...] It is good for you, but it takes time
and effort. Once you learn it, it becomes a habit. (Baker [2016, p. 454])
1. Introduction
In recent years, many concerns have been raised about the reliability of
scientific publications in both the natural and the social sciences (e.g., Be-
gley [2013], Begley and Ioannidis [2015], National Academies of Sciences
[2016], Christensen and Miguel [2018]). The Reproducibility Project in
Psychology, for instance, finds that the original results could be replicated
for only 39 of 100 experimental and correlational studies (Open Science
Collaboration [2015]). Moreover, even the replicated effects were
half the magnitude of the original effects (see also Gilbert et al. [2016]). A
similar project in experimental economics finds that 11 of 18 replicated
studies (61%) have significant effects in the original direction, well below
what is implied by reported p-values, and again the replicated effect size is
substantially smaller (Camerer et al. [2016]).¹ These concerns and the
perception of a reproducibility crisis led the renowned scientific journal Nature
to conduct an online survey asking 1,576 researchers about their views on
reproducibility in research (Baker [2016]). Slightly more than half of the
respondents agreed that there is a “significant crisis,” and an additional 38%
saw a “slight crisis.” Over 70% of the researchers reported having tried and
failed to replicate another study.
Motivated by the same concerns, we are interested in accounting
researchers’ perceptions of the frequency, causes, and consequences of
irreproducible research results published in accounting journals. We
believe that our findings are important for at least two reasons. First, we are
¹ Another project focusing on social science experiments published in Nature or Science finds
a significant effect in the same direction as the original study for 13 (62%) out of 21 stud-
ies, and the effect size of the replications is on average about 50% of the original effect size
(Camerer et al. [2018]).
