RNICE Model: Evaluating the Contribution of Replication Studies in Public Administration and Management Research

Research Article
Published: 1 July 2018
Public Administration Review, Vol. 78, Iss. 4, pp. 606–612. © 2018 by The American Society for Public Administration.
DOI: 10.1111/puar.12910
Abstract: Replication studies relate to the scientific principle of replicability and serve the significant purpose of
providing supporting (or contradicting) evidence regarding the existence of a phenomenon. However, replication
has never been an integral part of public administration and management research. Recently, scholars have called
for more replication, but academic reflections on when replication adds substantive value to public administration
and management research are needed. This article presents the RNICE conceptual model for assessing when and
how a replication study contributes knowledge about a social phenomenon and advances knowledge in the public
administration and management literatures. The RNICE model provides a vehicle for researchers who seek to evaluate
or demonstrate the value of a replication study systematically. The practical application of the model is illustrated using
two published replication studies.
Evidence for Practice
• Replication is the process of repeating previous research efforts with the aim of confirming or extending
previous findings and serves the important purpose of providing supporting (or contradicting) evidence
regarding the existence of a phenomenon.
• Academic reflections on when and how replication adds substantive value to public administration and
management research remain implicit and sparse.
• This article presents the RNICE conceptual model to guide both scholars (producers of information)
and public administration professionals (consumers of information) when evaluating the contributions of
replication studies.
Mogens Jin Pedersen
VIVE – The Danish Centre of Applied Social Science
Aarhus University
Justin M. Stritch
Arizona State University
Justin M. Stritch is assistant professor
at Arizona State University's School of
Public Affairs and senior research affiliate
at the Center for Organization Research
and Design. His research focuses on work
motivation, public management and
performance, employee decision making,
and organizational sustainability.
E-mail: jstritch@asu.edu
Mogens Jin Pedersen is senior
researcher at VIVE – The Danish Centre of
Applied Social Science and postdoctoral
researcher in the Department of Political
Science at Aarhus University. His research
focuses on public management and
performance, employee motivation and
decision making, cognitive biases, and
research methodology.
E-mail: mjp@vive.dk
Replicability is a key tenet of the scientific
method and positivist epistemological
approaches to research. As Epstein (1980,
796) notes, “[t]here is no more fundamental
requirement in science than that the replicability of
findings be established.” The validity of scientific
claims depends on the extent to which analyses
are reproducible and analytical results are reliable
and generalizable to other situations and subjects
(Campbell and Stanley 1963). Replication—the
process of repeating previous research efforts with the
aim of confirming or extending previous findings—
speaks to the principle of replicability.
Replication serves the important purpose of providing
supporting (or contradicting) evidence regarding the
existence of a phenomenon (Collins 1992; Mackey
2012). In statistical terms, replication lowers the
probability of Type I or Type II error in the testing of
any null hypothesis (Cohen 1994; Phye, Robinson,
and Levin 2005; Robinson and Levin 1997) and
controls for systematic extraneous variables
that may have confounded previous findings
(Krauth 2000; Schmidt 2009). Moreover, meta-analysis
is widely thought to be the platinum standard
of evidence (Stegenga 2011), and replications of
previous research findings are a prerequisite for
conducting one (Makel and Plucker 2014).1 While
replication is not a panacea (Makel, Plucker, and
Hegarty 2012; Pashler and Wagenmakers 2012),
dismissing replication implies a value of novelty
over truth (Nosek, Spies, and Motyl 2012) and a
misconception of science. As Tukey (1969, 84) notes,
“[c]onfirmation comes from repetition. Any attempt
to avoid this statement leads to failure and more
probably to destruction.”
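The statistical point above can be made concrete with a small simulation (an illustrative sketch, not part of the original article): if an original study and an independent replication are each tested at α = 0.05 under a true null hypothesis, the probability that *both* falsely reject is roughly α² ≈ 0.0025 — replication sharply lowers the Type I error rate.

```python
import numpy as np

rng = np.random.default_rng(42)
z_crit = 1.96          # two-sided critical value for alpha = 0.05
n, trials = 30, 20_000  # sample size per group; number of simulated studies

def false_positive(rng):
    """One 'study' under a true null: both groups drawn from the same distribution."""
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(0.0, 1.0, n)
    z = (a.mean() - b.mean()) / np.sqrt(2.0 / n)  # known-variance z-test
    return abs(z) > z_crit

original = np.array([false_positive(rng) for _ in range(trials)])
replication = np.array([false_positive(rng) for _ in range(trials)])

print(f"Type I rate, single study:           {original.mean():.3f}")   # ~ 0.05
print(f"Type I rate, study plus replication: {(original & replication).mean():.4f}")  # ~ 0.0025
```

Requiring an independent replication to reproduce a rejection before treating a finding as confirmed multiplies the false-positive probabilities, which is the sense in which replication "lowers the probability of Type I error" cited above.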
Replication studies play an integral role in the natural
sciences for testing the reliability and demonstrating
the generalizability of scientific findings (Madden,
Easley, and Dunn 1995). However, replication has
never been an integral part of public administration
and management research. As Walker, James, and
Brewer (2017, 1231) note, “Replication is not very
widespread in public management, partly because
replications are difficult to publish and faculty […]
