Bringing Rigor to the Use of Evidence in Policy Making: Translating Early Evidence

Authors: J. Taylor Scott, Daniel Max Crowley
DOI: http://doi.org/10.1111/puar.12830
Published: 01 September 2017
Public Administration Review, Vol. 77, Iss. 5, pp. 650–655. © 2017 by The American Society for Public Administration.
J. Taylor Scott is research assistant professor in the Edna Bennett Pierce Prevention Research Center at Pennsylvania State University and project coordinator for the Research to Policy Collaboration. Her work seeks to strengthen the use of empirical evidence by policy makers and inform policy-relevant research by facilitating connections between researchers and policy makers. This includes evaluating and strengthening models for translating science into policy.
E-mail: jxs1622@psu.edu
Daniel Max Crowley is assistant professor of human development and family studies at Pennsylvania State University and directs the Prevention Economics Planning and Research Program in the Edna Bennett Pierce Prevention Research Center. His research focuses on strengthening methods for conducting benefit–cost analyses of preventive interventions, leveraging administrative data to understand the fiscal impact of prevention, and facilitating evidence-based policy making through strategic investments in preventive services.
E-mail: dmc397@psu.edu
Abstract: Beyond the evidence provided by randomized controlled trials, there is a need to supplement and contextualize efficacy findings with early evidence. Such evidence may include program costs, quality implementation processes, and the impact of programs on different groups. This article considers the Quality and Impact of Component Evidence Assessment and other exemplary efforts for translating early evidence for policy making within a common framework. This framework includes processes for strategic review, development of guiding standards on the quality of evidence, and active communication with policy makers.
The success of the evidence-based policy movement, a persistently tight funding climate, and increased research accessibility via digital communications have all facilitated an exponential rise in the amount of information available about strategies to address public problems (Haskins and Margolis 2015). This has resulted in a pressing need for approaches that synthesize bodies of evidence to provide actionable insights for policy makers (Brownson 2011; Results for America 2015). Formal processes, such as the Quality and Impact of Component (QuIC) Evidence Assessment model, offer a method for bringing rigor to such translational efforts (Barbero et al. 2015). In particular, there is a need for such approaches to consider types of evidence beyond what is produced by the increasingly esteemed randomized controlled trial (RCT) (Cartwright 2007; Haskins and Baron 2011; Haskins and Margolis 2015).
Beyond Evidence from Randomized Controlled Trials
Increasingly, efficacy estimates produced by RCTs are seen as the most important, most mature, or "gold standard" evidence for policy making (Kaptchuk 2001). Indeed, there are many cases in which a new policy should not be implemented without completion of an RCT. For instance, clinical trial procedures should be followed for drug development (Tunis, Stryer, and Clancy 2003). Further, behavioral interventions should be tested for harmful effects using a randomized comparison group (Concato, Shah, and Horwitz 2000). Failing to do so can result in policies that cause more harm than good (Head 2008; Isett, Head, and VanLandingham 2015).
Unfortunately, the cost of conducting high-quality RCTs remains a pragmatic barrier, and such trials can at times be impractical, unethical, or politically infeasible (Coalition for Evidence-Based Policy 2012; Collins, Murphy, and Strecher 2007). Efforts by various groups, including the Coalition for Evidence-Based Policy, Pennsylvania State University's Methodology Center, and the Laura and John Arnold Foundation, actively seek to develop approaches for reducing the costs of experimental trials (Collins, Murphy, and Strecher 2007; Coalition for Evidence-Based Policy 2014). In particular, the Arnold Foundation's portfolio of low-cost RCTs has spurred numerous efforts in this area. Yet even with more affordable methods, other ethical, logistical, or political issues limit deploying RCTs (Concato, Shah, and Horwitz 2000). While RCTs are essential in some contexts, there are many circumstances in which efficacy findings from an RCT alone are not sufficient for informing policy. For instance, efficacy findings are only intermittently accompanied by key information about the resources needed to successfully implement a program at scale (Crowley et al. 2012; Gottfredson et al. 2015). Further, efficacy findings derived from tightly controlled studies may not provide necessary information about how to ensure adherence to intervention components when delivered outside an experimental context (Carroll et al. 2007).
Additional types of evidence are needed to supplement and contextualize efficacy findings (Heinrich 2007). Essential early evidence that extends beyond RCTs includes estimates of program costs, process evidence around maintaining implementation quality, and nonexperimental assessment of subgroup effects (Crowley, Coffman et al. 2014; Steuerle and Jackson 2016). While a policy should not be enacted based on a single finding, there are many instances
Evidence in Public Administration: Kimberley R. Isett, Brian W. Head, and Gary VanLandingham, Editors
