The Design and Practice of Integrating Evidence: Connecting Performance Management with Program Evaluation
Alexander Kroll, Florida International University
Donald P. Moynihan, University of Wisconsin–Madison

DOI: http://doi.org/10.1111/puar.12865
Published: 01 March 2018
Public Administration Review, Vol. 78, Iss. 2, pp. 183–194. © 2017 by The American Society for Public Administration. DOI: 10.1111/puar.12865.
Donald P. Moynihan is director of the La Follette School of Public Affairs and Epstein and Kellett Professor of Public Affairs at the University of Wisconsin–Madison. He is author, most recently, of Toward Next-Generation Performance Budgeting, published by the World Bank in 2016. He was the 2014 recipient of the Kershaw Award, given every two years to a scholar under the age of 40 for outstanding contributions to the study of public policy and management.
E-mail: dmoynihan@lafollette.wisc.edu
Alexander Kroll is assistant professor of public administration in the Steven J. Green School of International and Public Affairs at Florida International University. His research interests are the management of government organizations, the functions and dysfunctions of performance systems, and the role of organizational behavior in improving public services. He has received awards from the American Society for Public Administration and the Academy of Management.
E-mail: akroll@fiu.edu
Abstract: In recent decades, governments have invested in the creation of two forms of knowledge production about government performance: program evaluations and performance management. Prior research has noted tensions between these two approaches and the potential for complementarities when they are aligned. This article offers empirical evidence on how program evaluations connect with performance management in the U.S. federal government in 2000 and 2013. In the later time period, there is an interactive effect between the two approaches, which, the authors argue, reflects deliberate efforts by the George W. Bush and Barack Obama administrations to build closer connections between program evaluation and performance management. Drawing on the 2013 data, the authors offer evidence that how evaluations are implemented matters and that evaluations facilitate performance information use by reducing the causal uncertainty that managers face as they try to make sense of what performance data mean.
Evidence for Practice
• Program evaluation and performance management produce different knowledge and are more effective when they are complementary.
• Managers involved in program evaluations are more likely to use performance data because program evaluations make causal inference easier, establishing synergies for performance information use.
• The connections between program evaluation and performance management do not arise naturally; rather, they require a mixture of continuity and design.
A fundamental philosophical value of the study of governance is that structural conditions can be designed to improve governmental and therefore societal outcomes. This design value is so widely accepted that it saturates the language of public administration. Consider, for example, the persistent tendency by the advocates of any sort of change to frame their ideas as "reform," implying the need and ability to engage in a corrective improvement.
The promise and perils of design also suffuse basic tensions in the study of public administration (Levine et al. 1975). Practitioners who criticize the limited real-world relevance of scholarship are often frustrated by the lack of clear design prescriptions: what should we do? The long-running debate about the role of rationality often centers on barriers that undercut good design, such as the role of politics (Moe 1989), the failure to account for local conditions (Lindblom 1959), or the reliance on simplistic assumptions of human behavior and the functioning of administrative instruments (Andrews, Pritchett, and Woolcock 2013; Heinrich and Marschke 2010).
Another, less frequently considered, design issue is how the different components of government fit together. This issue is most often expressed in concerns about how structural silos of resources and expertise constrain the solving of "wicked problems" that demand more collaborative designs both inside and outside government (Emerson, Nabatchi, and Balogh 2012; Skelcher, Mathur, and Smith 2005). A variation of this problem is how different administrative policies or techniques fit together. Light (1998) narrates how a variety of government changes adopted in the United States over 50 years piled atop one another. Each could be justified in isolation, but the changes were not designed to work together as part of a coherent framework.
In this article, we identify a related and common design problem that is informed by these prior critiques: the coordination of different forms of knowledge production about government performance. In particular, we examine how program evaluation and performance management relate to one another, using the U.S. federal government as a case study. To the casual observer, these initiatives