Randomized Experiments and Reality of Public and Nonprofit Organizations: Understanding and Bridging the Gap

Authors: Nicola Belle, Paola Cantarelli
Published: 01 December 2018
DOI: 10.1177/0734371X17697246
Subject Matter: Articles
Review of Public Personnel Administration, 2018, Vol. 38(4) 494–511
© The Author(s) 2017
Article reuse guidelines: sagepub.com/journals-permissions
DOI: 10.1177/0734371X17697246
journals.sagepub.com/home/rop

Nicola Belle1 and Paola Cantarelli2
Abstract
This article aims to contribute to the methodological advancement of experimental public administration, a nascent and promising literature stream. The article discusses the assumptions behind the theory of experimentation and the consequences of their violation; the main types of experimental designs (i.e., lab experiments and artefactual, framed, and natural field experiments); discrete choice experiments, neglected so far in our discipline for no good reason; the computation of optimal sample sizes; and the procedures for dealing with noncompliance in field experiments. The article concludes by providing tips to help public administration scholars bridge the gap between randomized trials and reality.
Keywords
randomized control trials, experiments, experimental designs, external validity, discrete choice experiment, optimal sample size, noncompliance
The qualities of bodies . . . which are found to belong to all bodies within the reach of our
experiments, are to be esteemed the universal qualities of all bodies whatsoever.
—Isaac Newton (1729/1687)
1Scuola Superiore Sant’Anna MHL Laboratory, Pisa, Italy
2Bocconi University, Milan, Italy
Corresponding Author:
Nicola Belle, Scuola Superiore Sant’Anna MHL Laboratory, Piazza Martiri della Libertà 33, Pisa 56127, Italy.
Email: n.belle@santannapisa.it

Introduction
Not too long ago, Brewer and Brewer (2011) could locate only five randomized studies published in public management journals during the previous 15 years. In an impressively short time, randomized trials have become a boom industry, drawing unprecedented attention and scholarship within our field (e.g., Anderson & Edwards, 2015; Bouwman & Grimmelikhuijsen, 2016; Jilke, Van de Walle, & Kim, 2016). The rise of behavioral and experimental public administration (e.g., Blom-Hansen, Morton, & Serritzlew, 2015; Bouwman & Grimmelikhuijsen, 2016; Grimmelikhuijsen, Jilke, Olsen, & Tummers, 2016; James, Jilke, & Van Ryzin, 2017) has echoed trends in other social sciences, such as organizational behavior (e.g., Grant & Wall, 2009) and economics (e.g., List, Sadoff, & Wagner, 2011), in which experiments have long achieved prominence and guided the development of key theoretical principles.
Recent randomized trials in public administration journals range from the performance and job effort effects of monetary incentives (e.g., Belle, 2015; Belle & Cantarelli, 2015), to the performance effect of contact with beneficiaries in mission-driven organizations (e.g., Belle, 2013, 2014), to how cognitive biases influence performance evaluations (e.g., Andersen & Hjortskov, 2016; Olsen, 2015) and the selection of service providers (e.g., Jilke, Van Ryzin, & Van de Walle, 2015). Indeed, the systematic literature review by Bouwman and Grimmelikhuijsen (2016) identified nine research topics that public administration scholars have investigated so far through experimental designs.
Parallel to the rise of substantive work in experimental public administration, extant scholarship in our field also discusses randomized trials as a research method. Recent methodological studies include, for instance, the contribution that laboratory experiments can make to the generation of useful knowledge in public management (Anderson & Edwards, 2015); the degree to which laboratory, survey, field, natural, and quasi-experiments can address the endogeneity issue that is common in public management research (e.g., Blom-Hansen et al., 2015); the major challenges and limitations of conducting laboratory, field, and survey experiments in our field (Baekgaard et al., 2015); and the main barriers to an experimental public administration (Margetts, 2011).
This article aims to contribute to the advancement of behavioral and experimental public administration by addressing some fundamental methodological challenges faced by experimentalists in our field, with particular emphasis on the issue of external validity. Our goal is twofold. On the one hand, we want to provide guidance on how to use field experiments to bridge the gap between randomized trials and the reality of public and nonprofit organizations. On the other hand, we want to help public administration scholars make their experiments more powerful and efficient. These two interrelated goals are accomplished by taking stock of the still nascent experimental public administration scholarship, as well as research from the other social sciences in which experiments have long been given full consideration alongside other more established empirical methodologies. To the best of our knowledge, no previous attempts have been made to comprehensively tackle the foundational elements of the theory of experimentation and how those translate into implementation challenges for scholars in our discipline.

This article proceeds as follows. We first review fundamental theoretical assumptions underlying experimentation in the social sciences and use illustrative examples to show how violations of these assumptions can threaten the validity of causal inferences. Although, to the best of our knowledge, a discussion of such assumptions is missing in our discipline, we are convinced that understanding them is crucial for the development of a solid experimental public administration that produces usable knowledge. Second, we review the different types of experimental designs, tools, and techniques scholars have at their disposal to gain both positive and normative insights into contemporary public administration issues. In this section, we also introduce discrete choice experiments (DCEs), which, for no good reason, have been mostly neglected in our field so far. Third, we illustrate basic randomization techniques and describe how to calculate optimal sample size and arrangement. Fourth, we discuss how to deal with noncompliance in randomized field experiments. We conclude with a discussion of concrete tips for future experimental research in the context of public and nonprofit organizations. As a cautionary note, the “Theory of Experimentation” and “Dealing With Noncompliance in Field Experiments” sections will be clearer to readers who have a basic understanding of experimental designs.
The Theory of Experimentation
Academics (e.g., Henshel, 1980) and practitioners (e.g., Hatry, Winnie, & Fisk, 1981) alike have referred to randomized trials as the Cadillac of research design. Indeed, randomized experiments are the most efficient tool that researchers and program evaluators have at their disposal to obtain an unbiased estimate of the average effect caused by an intervention of some kind. In all other research designs, with the sole exception of regression discontinuity, what drives selection into conditions is either unknown or measured with error (Shadish, Cook, & Campbell, 2002). The distinguishing element that makes randomized experiments so powerful in supporting causal claims is the random assignment of units to treatments. When subjects have an equal and nonzero probability of being assigned to any experimental condition, the groups are probabilistically similar to each other on average. As a consequence, differences in outcome measures among the groups are attributable to the intervention.
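To make this logic concrete, the short simulation below codes up random assignment. It is our illustrative sketch rather than material from the article: the sample size, the 0.5 treatment effect, and the "ability" covariate are all invented for the example.

import numpy as np

rng = np.random.default_rng(seed=42)

n = 10_000
ability = rng.normal(0, 1, n)   # background characteristic, typically unobserved
true_ate = 0.5                  # treatment effect built into the simulation

# Potential outcomes: what each unit would do if untreated versus if treated
y_untreated = 10 + 2 * ability + rng.normal(0, 1, n)
y_treated = y_untreated + true_ate

# Random assignment: every unit has an equal, nonzero chance of treatment
treated = rng.random(n) < 0.5

# The groups are probabilistically similar on background characteristics ...
print(ability[treated].mean(), ability[~treated].mean())  # both close to 0

# ... so the difference in observed sample averages recovers the true ATE
observed = np.where(treated, y_treated, y_untreated)
print(observed[treated].mean() - observed[~treated].mean())  # close to 0.5

Rerunning the script with different seeds shows the estimate fluctuating around 0.5; that sampling variability is precisely what the sample size calculations discussed later are meant to keep in check.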
The easiest way to grasp the logic behind random assignment is to imagine two hypothetical states of the world: one in which unit i receives the treatment whose effect we want to estimate and one in which the same unit i goes untreated. To estimate the average treatment effect (ATE), we would need to observe how outcomes would change on average if every unit were to go from untreated to treated. In other words, for each unit, we would like to compare the treated potential outcome with the untreated potential outcome. This is obviously impossible because the treated potential outcome is only observable among units that receive the treatment and unobservable among units in the control group. Likewise, the untreated potential outcome is only observable among units in the control group and unobservable among units that receive the treatment. The unobserved potential outcome remains imaginary, that is, counterfactual. The expected difference between treated and untreated outcomes equals the ATE among the treated plus selection bias, which is the difference in average untreated potential outcomes between the treated and the untreated. Experimentalists use random assignment to create two or more groups of units that are, in expectation, identical prior to treatment. Under random assignment, treatment status is independent of the units’ potential outcomes and their background characteristics. Thanks to random assignment, we are able to estimate the ATE by taking the difference between two sample averages: the average outcome in the treatment group and the average outcome in the control group.
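The verbal decomposition above can be written compactly in potential-outcomes notation. The notation is ours rather than the article's: Y_i(1) and Y_i(0) denote unit i's treated and untreated potential outcomes, and D_i equals 1 for treated units and 0 otherwise.

\[
\underbrace{E[Y_i(1) \mid D_i = 1] - E[Y_i(0) \mid D_i = 0]}_{\text{observed difference in means}}
= \underbrace{E[Y_i(1) - Y_i(0) \mid D_i = 1]}_{\text{ATE among the treated}}
+ \underbrace{E[Y_i(0) \mid D_i = 1] - E[Y_i(0) \mid D_i = 0]}_{\text{selection bias}}
\]

Under random assignment, D_i is independent of (Y_i(0), Y_i(1)), so the selection-bias term vanishes and the ATE among the treated coincides with the overall ATE; the simple difference in sample averages is therefore an unbiased estimate.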
