DOI: 10.1111/1475-679X.12208
Journal of Accounting Research
Vol. 56 No. 2 May 2018
No System Is Perfect: Understanding How Registration-Based Editorial Processes Affect Reproducibility and Investment in Research Quality

ROBERT BLOOMFIELD, KRISTINA RENNEKAMP, AND BLAKE STEENHOVEN
ABSTRACT
The papers in this volume were published through a Registration-based Editorial Process (REP). Authors submitted proposals to gather and analyze data; successful proposals were guaranteed publication as long as the authors lived up to their commitments, regardless of whether results supported their predictions. To understand how REP differs from the Traditional Editorial Process (TEP), we analyze the papers themselves; conference comments; a survey of conference authors, reviewers, and attendees; and a survey of authors who have successfully published under TEP. We find that REP increases up-front investment in planning, data gathering, and analysis, but reduces follow-up investment after results are known. This shift in investment makes individual results more reproducible, but leaves articles less thorough and refined. REP could be improved by encouraging selected forms of follow-up investment that survey respondents believe are usually used under TEP to make papers more informative, focused, and accurate at little risk of overstatement.
Cornell SC Johnson College of Business.

Accepted by Christian Leuz. We are grateful for input from Sudipta Basu, Matt Bloomfield, Christopher Chambers, Ryan Guggenmos, Christian Leuz (Editor), Robert Libby, participants at Cornell University’s Behavioral Economics and Decision Research Showcase, authors who provided feedback on our reading of their articles published in this volume, and the hundreds of conference participants and accounting researchers who responded to our surveys. An Online Appendix to this paper can be downloaded at https://research.chicagobooth.edu/arc/journal-of-accounting-research/online-supplements.
© 2018, University of Chicago on behalf of the Accounting Research Center
JEL codes: C18; I23; M40
Keywords: registered reports; reproducibility; editorial processes; research discretion; peer review
1. Introduction
The Traditional Editorial Process (TEP) begins when authors submit manuscripts reporting their conclusions from analyses they have already conducted on data they have already gathered. The articles in this special Conference issue, Registered Reports of Empirical Research, were published through a Registration-based Editorial Process (REP), which begins when authors submit proposals to gather and analyze data to test their predictions. Authors submitted 71 proposals in total. After at least two rounds of revision and resubmission, the eight proposals for the papers in this Conference issue received an “in-principle” acceptance from editors, guaranteeing publication as long as authors gathered and analyzed their data as promised, whether or not results supported their predictions.
How did registration affect the quality of the articles published in this issue? How could registration provide a more useful complement to the traditional process? We address these questions by examining the articles themselves; comments from conference attendees; survey responses from conference attendees, reviewers, and authors; and survey responses from hundreds of accounting colleagues who have published papers through the traditional process and share views on the challenges they faced.
Many who advocate for REP emphasize how it mitigates threats to the reproducibility of p-values generated by Null Hypothesis Significance Testing (NHST) (Nosek and Lakens [2014], Chambers et al. [2014]). TEP allows authors to cherry-pick their analyses, rewrite their hypotheses after they see their results, and engage in other questionable practices that overstate the predictive power of the authors’ theory. TEP also biases the pool of published research because editors are unlikely to select papers that do not support predictions, and authors are unlikely to submit them.
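To see concretely how cherry-picking threatens reproducibility, consider a minimal simulation; this is our illustration rather than an analysis from any paper in this issue, and the sample sizes and number of candidate measures are hypothetical. When no true effect exists, a single pre-registered test produces false positives at roughly the nominal rate, while reporting only the most favorable of several candidate analyses does not:

# A minimal sketch (Python with NumPy/SciPy), assuming a true null effect.
# All parameters are hypothetical, chosen purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies = 10_000   # number of simulated studies
n_per_group = 50     # observations per condition
k_measures = 5       # candidate outcome measures per study
alpha = 0.05         # conventional significance threshold

registered_hits = 0  # significant results from one pre-registered test
cherry_hits = 0      # significant results after picking the best analysis
for _ in range(n_studies):
    p_values = []
    for _ in range(k_measures):
        treat = rng.normal(size=n_per_group)    # no true treatment effect
        control = rng.normal(size=n_per_group)
        p_values.append(stats.ttest_ind(treat, control).pvalue)
    registered_hits += p_values[0] < alpha  # the test committed to in advance
    cherry_hits += min(p_values) < alpha    # the "best" of the k analyses

print(f"Pre-registered test: {registered_hits / n_studies:.3f}")  # approx. 0.05
print(f"Cherry-picked test:  {cherry_hits / n_studies:.3f}")      # approx. 0.23

In this sketch, selecting the most favorable of five analyses more than quadruples the false-positive rate (about 0.23 versus 0.05, matching the analytic value 1 - 0.95^5); this is precisely the discretion REP removes by binding authors to their planned tests.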
REP mitigates questionable practices by requiring authors to gather and analyze data as they planned, and mitigates selection biases by requiring editors to make a publication decision before results are known. These changes make results more reproducible, but also change how authors invest in their projects. Under TEP, authors invest heavily in their studies after they have observed their results. They continue to gather data, refine their measures, adjust analyses to suit the distributions of their data, and rewrite to communicate more effectively. REP allows authors to make some of these follow-up investments, but provides little incentive for them to do so; by the time results are known, editors have already accepted the paper for publication. Authors are instead strongly motivated to make up-front investment in their proposal.
By our reading, the articles in this issue reflect REP’s increased emphasis on up-front investment. Authors devote more time and effort to data gathering than seen in the typical paper published under TEP, proposing larger sample sizes, more intricate hand-collection and measure construction, and more challenging experimental settings. The papers also report weaker results than might typically be expected in published research. This is due, at least in part, to REP’s de-emphasis of follow-up investment in unplanned analyses and revisions that can overstate predictive power; several papers’ results could have appeared stronger if authors had strategically selected which hypotheses, subsamples, and analyses to highlight, as they could have under TEP. But de-emphasizing follow-up investment comes with a cost: most of the papers in this issue could have been more thorough and focused if the authors had continued to invest in unplanned analyses and rewriting.
Our survey of conference authors, reviewers, and attendees documents the challenges posed by REP, and points to how they might be overcome. Authors of accepted proposals found it difficult to plan studies to the level of detail required for REP, while reviewers found it difficult to evaluate proposals without the usual benefit of hindsight enjoyed under TEP, which allows them to see which analyses “worked.” Many respondents believe that REP will generate higher quality articles if authors obtain more outside feedback before receiving in-principle acceptance, and if editors demand high standards for up-front investment, encourage investment in rigor over investment in scope or novelty, clarify the appropriate role of pilot data gathering, and set looser but clearer limits on the revisions authors can make after observing their results.
The net benefits of REP depend heavily on how authors actually use their discretion during follow-up investment in unplanned data gathering, analyses, and rewriting under TEP, and on how readers actually interpret authors’ claims. To provide some empirical evidence on these matters, we sent a survey to recently published authors in six well-regarded accounting journals, soliciting their views and first-hand stories about how various forms of discretion in unplanned data gathering, analysis, and rewriting affect research quality. We received nearly 300 responses. Overall, respondents see discretion as improving the quality of published research in accounting. However, some forms of discretion are viewed as more harmful than others. Respondents are most concerned about overstatement when authors use discretion over which measures and analyses they report and highlight, and only slightly less concerned about how authors use discretion to exclude entire subsamples (e.g., industries, time periods). Respondents believe that authors tend to overstate their findings when they change their theories and predictions after seeing results, but see revisions to hypotheses as largely an improvement in exposition at little cost. Respondents also see substantial benefit and little cost when authors use their discretion to gather more data or exclude unusual observations.
