A Fistful of Data: Unpacking the Performance Predicament

R. Paul Battaglio, The University of Texas at Dallas
Jeremy L. Hall, University of Central Florida

Editorial

Conceptualizing and measuring performance
in the public sector has often proven to be
a bridge too far. With performance-related
data more readily available to both scholars and
practitioners, the information age should be a time of
promise. Yet the elusiveness of objective performance
measures continues to dog the public administration
community. To be sure, the academy has made great
strides in providing scholars and practitioners with
a better understanding of performance (see, e.g.,
Andersen, Boesen, and Pedersen 2016). The articles
in this issue are a testimony to this effort, but also
demonstrate that more work is still needed. Indeed,
the performance predicament—complications that
arise in the measurement of performance—is an apt
description of the difficulties in articulating objective
indicators that accurately capture performance in
the public sector (see Garnett, Marlowe, and Pandey
2008; Pandey and Garnett 2006 for a discussion of
the difficulties in measuring interpersonal, external,
and internal communication and performance).
This predicament encompasses most, if not all, facets
of public organizations, including mission, culture,
budgeting, and human resources, to name a few.
Indeed, one of the great challenges of performance
management is deciding how much effort and
resources to devote to accountability efforts versus
mission-oriented activities. (We note that the
transition from performance measurement to
performance management blurs this distinction as
performance information comes to be used in making
daily program management decisions.) Another
challenge is navigating the thorny trade-off between
developing ideal indicators for a given measure and
sacrificing comparability with other jurisdictions. Ideally,
following performance management nomenclature,
we should set realistic goals based on an organization’s
mission; each goal should be represented by a set of
objective measures, which in turn are assessed with
one or more indicators. When we measure things
for comparison, we necessarily look to broader
definitions of performance; these standards are often
established by national or state agencies for reporting
by local agencies. Standard measures are valuable
for benchmarking, but in settling for measures that
work for all agencies, we miss the nuances of local
problems and stakeholder concerns that would
be articulated in goals drawn from local strategic
plans and the measures and indicators promulgated
therefrom. The “apples to apples” metaphor serves
as a goal for benchmarking efforts, but sadly the
associated measures may tell us only about the
number of apples or their average diameter, saying
nothing of their nutritional value, shelf life, or how
they taste in a pie. The accountability concern is that
what gets measured gets done. As a result, agency
responsiveness to something of great importance to
local stakeholders may be left unattended in favor of
effort that is captured in annual performance reports.
Over time we see the potential for mission creep and
even goal displacement, as agencies labor tirelessly to
demonstrate their responsiveness.
Complicating this predicament is the level at which
we define and measure performance, be it the
individual, team or group, organizational, or
network level. Problems of aggregation
confound our best efforts to understand performance,
or how a particular strategy or approach compares to
others. Reports, in the interest of providing something
digestible to the information consumer, distill data
into summary tables and graphics. Agency-level
reports necessarily average low- and high-performing
units' efforts across high- and low-performing time
periods, limiting the value of data for program
learning; an agency-wide average of 80 percent on a
target, for instance, can conceal one unit performing
at 95 percent and another at 65 percent. In our last
issue, we featured articles on
collaboration and coproduction. Organizations
need not have perfectly aligned goals to pursue
collaborative projects, and neither should their
performance measures be expected to align perfectly.
A unique case of policy advocacy several years ago saw
Baptist ministers and bourbon distillers join forces
to oppose mail-order beer delivery in a particular
state. Their missions and goals could be said to be
on opposite ends of the spectrum, but they found
common ground in a shared policy objective.

Public Administration Review,
Vol. 78, Iss. 5, pp. 665–668. © 2018 by
The American Society for Public Administration.
DOI: 10.1111/puar.12989.
