Doing More with More: How “Early” Evidence Can Inform Public Policies

Authors: Erika Fulmer, Colleen Barbero, Sharada Shantharam, Siobhan Gilchrist, Michael W. Schooley
DOI: http://doi.org/10.1111/puar.12831
Published: 01 September 2017
Public Administration Review, Vol. 77, Iss. 5, pp. 646–649. © 2017 by The American Society for Public Administration. DOI: 10.1111/puar.12831.
Erika Fulmer, Colleen Barbero, Sharada Shantharam, Siobhan Gilchrist, Michael W. Schooley
Centers for Disease Control and Prevention

Kimberley R. Isett, Brian W. Head, and Gary VanLandingham, Editors

Erika Fulmer is a policy analyst in the Applied Research and Evaluation Branch, Division for Heart Disease and Stroke Prevention, CDC, Atlanta, Georgia. Specializing in policy evaluation and applied research translation, she has worked at the national, state, and local levels. Her 20 years of public health work include designing and implementing strategic performance measurement systems, planning national evaluations, providing translation and evaluation technical assistance, and participating in the development and implementation of policy tracking systems. E-mail: duj2@cdc.gov

Colleen Barbero is a health scientist in the Applied Research and Evaluation Branch, Division for Heart Disease and Stroke Prevention, Centers for Disease Control and Prevention (CDC), Atlanta, Georgia. She is a trained policy scientist and program evaluator. Since 2011, she has led the development of an approach to assess the best available evidence for public policies, working with diverse partners representing academic, practitioner, and government perspectives. E-mail: vrm5@cdc.gov

Sharada Shantharam works in the Applied Research and Evaluation Branch, Division for Heart Disease and Stroke Prevention, CDC, Atlanta, Georgia. Most of her work falls under the umbrella of policy research, specifically early evidence assessments and policy implementation studies. The other side of her work centers on applied research and translation related to chronic disease, heart disease, and stroke prevention. Sharada received her master's degree in public health from Washington University in St. Louis. E-mail: ktq4@cdc.gov

Siobhan Gilchrist has conducted policy research and analysis in the Applied Research and Evaluation Branch, Division for Heart Disease and Stroke Prevention, CDC, Atlanta, Georgia, since 2009. She earned a law degree in 2006 after leading projects as an epidemiologist in local, state, national, and international public health agencies to advance science-based programs and policies that lead to positive public health outcomes. E-mail: smg0@cdc.gov

Michael W. Schooley is chief of the Applied Research and Evaluation Branch, Division for Heart Disease and Stroke Prevention, CDC, Atlanta, Georgia. His public health career has focused on applied research, with particular emphasis on monitoring and evaluating chronic disease prevention and control programs and policies. Recently, his efforts have focused on advancing pragmatic approaches to assess and build evidence for public health programs and policies. E-mail: mschooley@cdc.gov
Calls for government-funded activities to be “evidence based” are ubiquitous. “Gold standard” studies, including randomized controlled trials and systematic reviews (Isett, Head, and VanLandingham 2016), have expanded the availability of evidence-based programs and practices (VanLandingham and Silloway 2016). However, because of their complexity, large-scale policies (comprising services, laws, rules, and regulations implemented at the population level) are more difficult to study experimentally, resulting in evidence gaps.
Public policies should be informed by the best information available. This article focuses on the utility of early evidence assessment and provides an example of one approach called the Quality and Impact of Component (QuIC) Evidence Assessment. This approach provides a systematic and timely method for policy analysis that can be applied to many types of emerging and complex public policies.
Gaps in the Evidence to Inform Policy
Public policies rarely have the same kind of rigorous, “gold standard” evidence as the pilot interventions they intend to scale up. Policies are often complex, contain multiple components, and are implemented on multiple levels over time, so extracting component-specific effects proves difficult. Other challenges to conducting rigorous policy impact studies include sparse data (Isett, Head, and VanLandingham 2016), poorly matched communities for controlled comparisons, and complex, macro-level forces (Walt et al. 2008).
The complexities that accompany major public policies inevitably create gaps in empirical evidence. Evidence gaps have been used to justify inaction, but inaction is unacceptable for practitioners given the urgency and/or magnitude of many public problems. A systematic approach to assessing all available evidence can inform policy; this article describes how applying such an approach can help fill gaps.
Addressing Gaps with Early Evidence Assessment
Public decision makers need to know which policies are feasible and most likely to achieve the desired impact. When the evidence base for a policy does not include impact studies, an approach for assessing all relevant, available evidence is needed to inform policy decisions in the short term and research studies in the longer term. Figure 1 outlines a process for policy research that begins with early evidence assessment.
Early evidence assessment is completed simultaneously with policy surveillance, which is the systematic collection, analysis, and interpretation of policies. When policy implementation, rating, and impact studies become feasible, they may validate the results of early evidence assessment. Once enough high-quality policy impact studies are complete, a systematic review can provide an authoritative recommendation for policy dissemination and implementation.
Integrating many types of evidence from diverse sources during early evidence assessment provides the breadth of information that is needed for policy decisions (Bowen and Zwi 2005). For example, practice-based knowledge, including professional judgment and an understanding of context, can be gleaned from policy statements, guidelines, and briefs by professional and nonprofit organizations; formative and process evaluation studies; and journal articles. This knowledge can help build theory. Additionally, studies examining outcomes of discrete interventions, alone or in combination with other interventions, can inform whether policies applying similar approaches could be scaled to the population level.
Early Evidence Assessment with QuIC
A challenge in expanding the evidence base for policy decisions is that existing methods for assessing diverse evidence, such as integrative literature review, lack rigor (Whittemore and Knafl 2005). QuIC applies a systematic assessment to the early evidence aligned with individual components of a multicomponent policy.
