Author: Sinden, Amy

I. INTRODUCTION
II. BACKGROUND
   A. Formal and Informal CBA
      1. The Normative Grounding of Formal CBA in Welfare Economics
      2. Quantification
         a. Quantitative Risk Assessment
         b. Monetization
         c. A Typology of Unquantified Benefits
      3. The Formality Spectrum
      4. Standard-Setting vs. Litmus-Test CBA
   B. Agencies' Legal Obligations Regarding Formal CBA
   C. Previous Literature
III. METHODS AND RESULTS
   A. The Data Set
   B. Axis 1: Quantification
      1. Significant Categories of Benefits Unquantified
      2. Reasons for Lack of Quantification
   C. Axis 3: Alternatives
   D. Digging Deeper: The Story Behind the Numbers
      1. The Outsized Role of Particulate Matter and the Undersized Role of Toxics
      2. Missing Benefits: Ecological Effects
      3. The Other Rules
IV. IMPLICATIONS: THE PROBLEM OF UNQUANTIFIED BENEFITS
   A. The Constraints Imposed by Unquantified Benefits
   B. Breakeven Analysis
   C. Implications
V. CONCLUSION

I. INTRODUCTION

[Cost-benefit analysis] minimizes decision costs through the magic of quantification. Once valuations are obtained from the marketplace and surveys... decisions are relatively automatic.

- Jonathan Masur & Eric Posner (1)

When important benefits and costs cannot be expressed in monetary units, [cost-benefit analysis] is less useful, and it can even be misleading, because the calculation of net benefits in such cases does not provide a full evaluation of all relevant benefits and costs.

- Office of Management and Budget (2)

It's a simple idea. Before issuing regulations, the government should first add up all the social costs and the social benefits and compare them. (3) But the devil is in the details. Drawing meaningful conclusions from a comparison of costs and benefits is difficult--and sometimes perhaps impossible--unless you can quantify both sides in a common metric. If costs are measured in dollars, then the best way to accomplish a comparison is to measure the benefits in dollars as well.

And there's the rub. While regulatory costs tend to involve values that are relatively easy to measure and express in monetary terms--the cost of installing a scrubber on a smokestack, for example--regulatory benefits tend to involve things that are hard to quantify, and even harder to monetize. (4) They include things like effects on human health, premature death, degradation of ecosystems, extinction of species, and so on. And if costs are completely (or relatively completely) monetized, but benefits only partially so, then drawing any meaningful conclusion from a comparison becomes problematic.
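The asymmetry described above can be made concrete with a toy sketch (all dollar figures are invented for illustration): when costs are fully monetized but some benefits are not, the monetized comparison yields at best a lower bound on net benefits, not their sign.

```python
# Toy illustration (hypothetical numbers): a fully monetized cost
# compared against only partially monetized benefits.
costs = 120.0  # e.g., scrubber installation, in $ millions (fully monetized)

monetized_benefits = 90.0  # e.g., avoided health harms, in $ millions
unquantified = ["ecosystem effects", "neurological effects"]  # no dollar value

# The monetized comparison gives only a lower bound on total benefits,
# so at best it bounds net benefits from below.
lower_bound_net = monetized_benefits - costs
print(f"Net benefits are at least {lower_bound_net:+.1f}M")

# Unless monetized benefits alone exceed costs, the sign of true net
# benefits is indeterminate whenever anything is left unquantified.
if monetized_benefits >= costs:
    verdict = "benefits justify costs"
elif unquantified:
    verdict = "indeterminate"  # unquantified benefits might exceed the gap
else:
    verdict = "costs exceed benefits"
print(verdict)
```

On these invented numbers the analysis can say only that the unquantified categories would need to be worth more than the $30 million shortfall for the rule to pass a cost-benefit test; it cannot say whether they are.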

This is hardly a new insight. Indeed, most of the criticisms raised by those who are skeptical of cost-benefit analysis (CBA) in agency rulemaking relate in some way to the difficulties posed by the quantification and monetization of regulatory benefits. The list of reasons that benefits may be left unquantified or under-counted in CBA is long. And many of these reasons implicate deep theoretical and normative issues that have spawned an extensive literature over many decades. (5) But the difficulties posed by quantification also raise a straightforward empirical question that has been largely ignored: how often and to what extent does the problem of unquantified benefits actually arise in practice? (6)

Asking that empirical question also brings into focus a more prosaic problem that is frequently mentioned but rarely analyzed in any depth--the problem of insufficient data. (7) Putting aside the perhaps more intellectually exciting problems of incommensurability, endowment effects, wealth effects, discount rates, and so on, benefits are sometimes (perhaps quite often) left unquantified and under-quantified in CBA for the simple reason that the relevant data don't exist. (8)

CBA skeptics almost always list the missing data problem in an initial catalogue of CBA's shortcomings, but then usually move on to tackle deeper theoretical issues. Proponents of CBA often acknowledge the problem as well, but then shrug and move on as though it doesn't really matter or is of trivial enough magnitude to be safely ignored. But when we tackle the empirical question of the frequency and magnitude of unquantified benefits in the real world--as I did in the original empirical study presented below--it turns out that the missing data problem looms large and, as I argue, calls into question not just the practice of CBA but the intellectual foundations on which it rests.

All of this matters, particularly now. The Trump Administration has declared war on the regulatory state. (9) A series of executive orders have promised to reduce regulatory burdens, and the President has pledged to undo a litany of Obama-era regulations aimed at protecting public safety and the environment--rules on climate change, (10) highway safety, (11) worker protections, (12) wetlands preservation, (13) and a host of other pressing issues. (14)

In this war, CBA will play a central role, (15) as it has since an earlier icon of anti-regulatory zeal, President Ronald Reagan, first imposed a CBA requirement on federal agencies nearly four decades ago. (16) Since then, CBA has been embraced by both Democratic and Republican administrations, but in academic and policy circles, it continues to spark fierce debate: is it a valuable technocratic tool that harnesses "the magic of quantification" to meaningfully evaluate the quality and desirability of regulations, or a smokescreen that cloaks a garbage-in-garbage-out analysis in a veneer of scientific objectivity? Tackling the question of unquantified benefits empirically, it turns out, begins to shed new light on this debate.

So how big is the problem of unquantified benefits? Anecdotal evidence suggests that it may be significant. (17) Case studies of individual CBAs reveal large and significant categories of benefits left uncounted. Cass Sunstein, for example, found that in its CBA on the regulation of arsenic in drinking water, the United States Environmental Protection Agency (EPA) left unquantified the effects of five of the seven different kinds of cancer associated with arsenic, along with a host of other health effects, including "pulmonary, cardiovascular, immunological, neurological, and endocrine effects." (18) The CBA accompanying EPA's 2011 mercury and air toxics rule for power plants monetized only one narrow human health endpoint: IQ losses suffered by children exposed to mercury in utero when their mothers ate fish caught recreationally in U.S. waters. (19) It thus excluded the vast bulk of exposures to pregnant women--all exposures from commercially caught fish and from fish caught in non-U.S. waters. (20) It also left out numerous other impacts, including IQ losses in other populations, other neurological effects, potential cardiovascular, genotoxic, and immunotoxic effects, all ecological effects, and all other toxics besides mercury. (21) Similarly, EPA's CBA of its rule governing cooling water intakes at power plants was roundly criticized for leaving entirely unquantified the aquatic ecosystem benefits of the rule, and for leaving out all but two percent of the fish populations it did try to count. (22)

Although these case studies and anecdotal accounts are important, this Article tackles the question of unquantified benefits more systematically through an empirical study of a set of forty-five CBAs conducted by EPA over a recent thirteen-year period. I chose to focus on EPA because it is the agency that is usually held up as the gold standard for agency conduct of CBA. (23) My data set included the CBAs conducted by EPA in connection with each of the major rules (primarily those with effects on the economy of $100 million or more) issued between 2002 and 2015. (24)

While this empirical project has embedded within it a paradox--it seeks to measure what the agency has deemed immeasurable--I was nonetheless able to uncover some evidence as to the magnitude of the benefits left unquantified in these CBAs. In thirty-six out of the forty-five CBAs I analyzed (80%), EPA described as "important," "significant," or "substantial" categories of benefits that the agency excluded as unquantifiable due to data limitations. (25)

Indeed, in certain instances, the monetized benefits estimate left out the value of ameliorating the very harm at which the rule itself was aimed. Thirteen of the rules had the explicit purpose of reducing emissions of hazardous air pollutants and yet the CBAs failed to monetize the value of reducing those pollutants at all. (26) Virtually all of the monetized benefits came instead from the salutary fact that emissions controls aimed at reducing hazardous air pollutants also happen to produce the ancillary benefit of reducing a different pollutant: particulate matter. (27)

While admittedly preliminary, these data suggest that the problem of unquantified benefits is a big one that deserves more attention than it has received. One consequence of significant benefits remaining unquantified, for example, is that it becomes impossible for the agency to perform formal CBA of the sort called for in the executive orders and guidance memos governing agency use of CBA. Rather than identifying the efficient level of regulation, the analyst can draw only limited conclusions. Accordingly, these results suggest that formal CBA is even further unmoored from its foundations in welfare economics and Kaldor-Hicks efficiency than most of its defenders have assumed. (28)

For environmental regulation, there are other standards--feasibility and health-based standards, in particular--with long track records in agency practice that don't require comprehensive monetization of regulatory benefits. (29) These standards have been criticized for being insufficiently grounded in efficiency and welfare economics. (30) But if CBA's own grounding in efficiency is itself called into question, then it no longer has that leg-up...
