Does the Final Score Truly Count? Performance Report Scorecards and the Government Performance and Results Act of 1993

Author: Timothy P. Schmidle
Published: 01 November 2012
DOI: http://doi.org/10.1111/j.1540-6210.2012.02658.x
An Overview of the GPRA and the
Performance Report Scorecard Project
Under the GPRA, federal agencies must prepare strategic plans, performance plans with program-specific goals and performance indicators, and performance reports. These requirements are actually relatively minor, notwithstanding the goals and rhetoric accompanying the GPRA (Moynihan and Lavertu 2012). Government Performance and Results: An Evaluation of GPRA's First Decade reflects its authors' affiliation with the Mercatus Center at George Mason University, "the world's premier university source for market-oriented ideas."1 The center undertook a 10-year, self-funded Performance Report Scorecard project to assess the quality of GPRA-mandated performance reports from 24 federal agencies and independent departments. The Scorecard did not critique agency performance; rather, it "assessed the quality of disclosure, not the quality of results" (xxiii).
Performance reports were rated on three topics: transparency ("How easily can a non-specialist find and understand the report?"), public benefits ("How well does the report document the outcomes the agency produces for the public and compare them with costs?"), and leadership ("How well does the report demonstrate that agency managers use performance information to make decisions?") (xxiii–xxiv). Potential scores on each topic's four criteria ranged from 1 ("no useful content") to 5 ("potential best practice");2 thus, a report's total score could be anywhere from 12 to 60 points. Over the Scorecard's 10-year period, 56 points was the highest score (for the U.S. Department of Labor's performance report in fiscal year 2008) and 17 points the lowest (U.S. Department of Defense, fiscal year 2007). An agency's ranking did not necessarily carry over to the following year, as scoring standards were tightened over time with the emergence of "new best practices."
An Overview of the Book: Part I (GPRA and the
Quality of Performance Information)
Chapter 1 reports total scores and accompanying rankings from the first and last year (fiscal years 1999
Jerry Ellig, Maurice McTigue, and Henry Wray, Government Performance and Results: An Evaluation of
GPRA’s First Decade, American Society for Public
Administration Series in Public Administration and
Public Policy (Boca Raton, FL: CRC Press, 2011).
321 pp. $59.95 (cloth), ISBN: 9781439844649.
Government Performance and Results: An
Evaluation of GPRA’s First Decade by Jerry
Ellig, Maurice McTigue, and Henry Wray
is an uneven work. The book's primary strength is
its critique of federal agencies’ performance reports
mandated by the federal Government Performance
and Results Act of 1993 (GPRA). Thereafter, the
content is a rather curious selection (particularly given
the book’s subtitle); the remaining material focuses
principally on topics that are, at best, tangential to
the GPRA. Furthermore, although the compilation of
government performance information rather than its
application receives the greatest attention (Moynihan
2009; Schick 2001), performance measurement’s
utility lies in its actual use (Metzenbaum 2009). The
book’s recommendations for fostering greater use
of performance information are few in number and
narrowly targeted. As such, the monograph’s authors
provide little guidance as to how the performance
reports that they so ably critique can constitute more
than mere compliance with the GPRA’s reporting
requirements.
The intent of the book is indicated only in passing, with the authors' observation that their book
constitutes an “attempt to answer” six questions:
"What factors make performance reports relevant and informative?," "Has the quality of information disclosed to the public improved?," "Why do some agencies produce better reports than others?," "Has GPRA led to greater availability and use of performance information by federal managers?," "Has GPRA led to greater use of performance information in budget decisions?," and "What steps would make federal management and budget decisions more performance oriented?" (xix).
Sonia M. Ospina and Rogan Kersh, Editors
Timothy P. Schmidle
New York State Workers’ Compensation Board
Timothy P. Schmidle is a researcher
in the policy and research unit of the New
York State Workers’ Compensation Board.
He did his graduate work in labor relations
at Cornell University and in public administration at Syracuse University.
E-mail: schmidletps@live.com
Public Administration Review,
Vol. 72, Iss. 6, pp. 926–934. © 2012 by
The American Society for Public Administration.
DOI: 10.1111/j.1540-6210.2012.02658.x.
