Although performance-based budgeting (PBB) systems have yet to be fully incorporated into budget processes, many scholars and practitioners expect them to figure more prominently in governmental budgeting decisions in the coming decades (Joyce and Pattison 2010). At the national level, scholars and practitioners have observed that the Government Performance and Results Act (GPRA), the President's Management Agenda (PMA), and the Program Assessment Rating Tool (PART) do not successfully link performance evaluation results to actual budget outcomes. However, Joyce (2011) contends that those efforts paved the way for more recent reforms by providing lessons, developing the capacity to use performance information, and raising the probability that similar future reforms will be implemented successfully. Hou et al. (2011) report similar observations for state PBB initiatives: for some agencies in the Washington state government, PBB systems contributed to a better and clearer understanding of agency budgetary needs.
Some recent empirical studies support the assertion that PBB systems have improved budget processes by enhancing productive efficiency (i.e., the ratio of outputs to inputs). Reddick (2003) examined how PBB systems affect state functional expenditures; his findings show that PBB systems somewhat constrained selected functional expenditures as well as aggregate state expenditures. In a similar vein, Barzelay and Thompson (2006) reported that responsibility budgeting, implemented by the Air Force Materiel Command (AFMC), significantly controlled its program costs.
Alongside this somewhat promising evidence on productive efficiency, another key question has garnered relatively little attention: allocative efficiency. A key aim of various PBB initiatives is to transfer resources to programs where their benefits can be maximized (Melkers and Willoughby 1998), a concept known as allocative efficiency (Robinson and Brumby 2005, 5-13). Ho (2011) shows that, for a set appropriation amount, budget decision makers at the "sub-departmental" level might be able to make cost-efficient budget allocations across different programs. Ho indicates that PBB systems are operative at the sub-departmental level in a local government, where "public managers conduct their daily work and where strategic planning, performance goal setting, and program budgeting are logically linked" (398). He also contends that street-level bureaucrats and program-level staff have direct contact with clients and therefore better ideas about innovative approaches to delivering public services. The PBB systems in local government encouraged heavier involvement by street-level bureaucrats, thereby contributing to the cost-effectiveness of city programs. Ho's study is one of the rare empirical studies on the allocative efficiency of PBB systems.
Despite Ho's (2011) promising study at the local level, virtually no study has investigated whether PBB systems improve allocative efficiency in state and national governments, where budget decision makers' capacity is heavily constrained by the sheer volume of budget-related information and by prescribed political and legal institutions. The national government, in particular, has been implementing various forms of PBB since the early 1990s. This paper attempts to fill that gap in the literature. Using GPRA performance evaluation scores and budget data for the U.S. Department of Commerce (DOC), it empirically tests whether performance evaluation scores are incorporated into agency budgets. The findings will be a meaningful addition to previous studies, which have relied on either anecdotal cases or self-reported surveys.
THE GOVERNMENT PERFORMANCE AND RESULTS ACT (GPRA) OF 1993 AND ALLOCATIVE EFFICIENCY
As White (2012) indicates, there are two ways of pursuing efficiency in budget analysis. One, known as "productive" efficiency, measures the ratio of outputs to inputs and has been the object of traditional budget analysis. The other, called "budgeting for results," measures the performance of different programs in order to allocate resources among them. This concept is close to the definition of allocative efficiency introduced above and elaborated in the next section.
Allocative efficiency is more difficult to achieve because of the difficulty of comparing performance measures across different programs, the lack of analytic expertise, and the high probability that such comparisons will be contested for invoking value judgments. At the federal level, GPRA and PART are the latest attempts at PBB for budget decisions, but GPRA is more heavily focused on results than PART (Joyce 2011; Ellig, McTigue, and Wray 2011, 203-213), making it a good target for this paper.
According to Government Accountability Office (GAO) surveys conducted between 1994 and 2007, federal managers had several types of performance measures (e.g., outcome, output, efficiency, customer satisfaction, and quality). GPRA initiatives were correlated with relatively passive forms of performance use, such as refining performance measures and program goals. In contrast, actual use of performance information (e.g., for resource allocation, priority setting, or adopting new approaches) increased only slightly, and GPRA initiatives were not correlated with this more purposeful use: performance data were not used for actual program, resource, or employment management (Ellig, McTigue, and Wray 2011, 192-195; Moynihan and Lavertu 2012).
More specifically, regarding allocative efficiency from GPRA initiatives, only a few studies exist, and their evidence is anecdotal and mixed at best. The GAO surveys identified a few ways in which GPRA initiatives can improve allocative efficiency in agency budget preparation. For instance, the Small Business Administration (SBA) used performance data to terminate a program that provided computers and Internet access to entrepreneurs because those services had already become widely available. Some other agencies successfully measured outcomes and used the performance information to make programmatic decisions (Ellig, McTigue, and Wray 2011, 188-191).
Cost-effectiveness measures gauge the cost of delivering one unit of outcome or result. As emphasized above, GPRA initiatives were supposed to shift resources from less cost-effective to more cost-effective programs. A study on the cost-effectiveness of job training programs, conducted the year after federal agencies issued their first GPRA evaluation reports, indicates, however, that the expected resource reallocation did not take place. Three programs implemented by the Department of Labor, Youth Transition, Welfare to Work, and Job Corps, each received appropriations of more than one billion dollars in FY 1999, yet job placements per million dollars, a typical example of an outcome or result measure, were fewer than 60 for each of the three. In contrast, School to Work (Education) and Veterans in Need (Labor) received FY 1999 appropriations of slightly over ten million dollars each, although their placements per million dollars exceeded 1,700. Clearly, there is no match between result-based performance evaluation scores and budget allocations (Ellig, McTigue, and Wray 2011, 178-180); in this case, the relationship even runs in reverse.
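The placements-per-million-dollars metric above can be sketched as a short computation. The placement counts and budgets below are invented for illustration (the source reports only the per-million thresholds, not raw counts), chosen so the two stylized programs land on the two sides described in the text:

```python
# Hypothetical illustration of the cost-effectiveness gauge discussed above:
# job placements delivered per one million dollars of appropriation.
# These figures are NOT the FY 1999 data; they are invented to mirror the
# pattern the text describes (a large program below 60 placements/$M,
# a small program above 1,700 placements/$M).
programs = {
    # name: (appropriation in $ millions, job placements)
    "Large program (> $1B)": (1000.0, 55_000),
    "Small program (~ $12M)": (12.0, 21_000),
}

for name, (budget_millions, placements) in programs.items():
    per_million = placements / budget_millions
    print(f"{name}: {per_million:.0f} placements per $1M")
# Large program: 55 placements per $1M  -> below the 60 threshold
# Small program: 1750 placements per $1M -> above the 1,700 threshold
```

Under a results-oriented allocation rule, dollars would flow toward the second program; the study cited above found the opposite pattern.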
MEASURES OF ALLOCATIVE EFFICIENCY
Budget scholars have long indicated how hard it is to achieve allocative efficiency, especially for public programs that do not carry price tags (Key 1940; Lewis 1952). That difficulty repeats itself in this paper. Allocative, or Pareto, efficiency is maximized when the marginal social cost incurred in delivering a certain program equals its marginal social benefit, which reflects consumers' preferences and tastes for it. Economic resources will be consumed to produce the program until its marginal cost equals its marginal benefit (Hyman 1986, 316-318; 2011, 58-62). This condition also means that the social surplus (consumer surplus plus producer surplus) from the program is maximized (Boardman et al. 2011, 59-61). One lingering question is how to measure marginal benefits from public programs without price tags. A related issue is how to locate the efficiency-maximizing quantities of multiple public programs with potentially different benefit measures. Key (1940) and Lewis (1952) clearly identified this fundamental challenge in applying the concept of allocative efficiency to multiple "public" programs.
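The efficiency condition described in the paragraph above can be stated compactly. The notation here (q* for the surplus-maximizing quantity; MSB and MSC for marginal social benefit and cost; CS and PS for consumer and producer surplus) is introduced for exposition and is not the source's own:

```latex
% Allocative (Pareto) efficiency: produce the program up to the
% quantity q* at which marginal social benefit equals marginal
% social cost.
\[
  MSB(q^{*}) = MSC(q^{*})
\]
% Equivalently, q* maximizes the social surplus, the sum of
% consumer surplus and producer surplus:
\[
  q^{*} = \arg\max_{q}\ \bigl[\, CS(q) + PS(q) \,\bigr]
\]
```

The measurement problem raised by Key (1940) and Lewis (1952) is precisely that, for unpriced public programs, the MSB schedule in the first condition is unobservable.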
In other words, it is difficult to assume that performance evaluation scores accurately measure the marginal benefits of multiple public programs and are therefore linearly linked to budget resources (e.g., dollars), if not economic resources. However, one thing is clear: if resources were reallocated to maximize allocative efficiency, one should observe, at a minimum, fluctuations in each program's share of the budget. An early study presented a still-useful measure of how dollars move among different programs. Table 1 explains the measure Natchez and Bupp (1973) developed.
Assume that an imaginary agency operates three programs, A, B, and C, over three fiscal years. The third column in Table 1, Budget, shows each program's budget in a given year. The Total column is the sum across all agency programs for each fiscal year; for instance, $450 is the sum of all three programs in fiscal year 1. The Program Proportion column measures each program's relative share of the total in each year. As an example, the program proportion of program A in year 1 is 0.22 (= 100 / 450). Because the program proportion might show extreme fluctuations across years, Natchez and Bupp (1973) suggested further standardizing the program proportion across years. The Mean Program Proportion column, for example, reports the mean value of the program proportion across all three years. The mean value of the program proportion for program C will be computed as...
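The two quantities from Table 1 can be sketched in a few lines of code. Program A's year-1 budget ($100) and the year-1 total ($450) follow the table as described above; the remaining budget figures are hypothetical stand-ins, since Table 1 itself is not reproduced here:

```python
# Natchez and Bupp's (1973) program-proportion measure, illustrated.
# Only A's year-1 budget (100) and the year-1 total (450) come from
# the text; the other figures are hypothetical.
budgets = {
    "A": [100, 120, 110],
    "B": [150, 160, 170],
    "C": [200, 210, 260],
}
n_years = 3
totals = [sum(budgets[p][t] for p in budgets) for t in range(n_years)]

# Program proportion: each program's share of the agency total per year.
proportions = {
    p: [budgets[p][t] / totals[t] for t in range(n_years)] for p in budgets
}

# Mean program proportion: the average of a program's yearly shares.
mean_proportions = {p: sum(props) / n_years for p, props in proportions.items()}

print(round(proportions["A"][0], 2))  # 0.22, matching the example in the text
```

By construction, the program proportions sum to 1.0 within each year, so the mean program proportions sum to 1.0 as well; reallocation toward more cost-effective programs would show up as movement in these shares over time.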
Performance-based budgeting (PBB), allocative efficiency, and budget changes: the case of the U.S. Department of Commerce.
Author: Ryu, Jay Eungha
COPYRIGHT GALE, Cengage Learning. All rights reserved.