Estimating empirical Blackstone ratios in two settings: murder cases and hiring.

Author: Bushway, Shawn D.

ABSTRACT

There is a growing awareness in the legal literature of the need to estimate the prevalence of errors that exist within the criminal justice system. A majority of the time, the focus is on the false positive, or wrongful conviction, rate. Yet, a complete picture of the decision process requires estimates of both false positives and false negatives. In this paper, I generate an estimate of the false negative rate for a representative sample of murders in Chicago. I also estimate the cost ratio of false negatives to false positives that would be needed to justify using records of incarceration to identify people at risk in the Chicago metropolitan area. Both estimates should shed meaningful light on the growing debate about what rules should be set to achieve more socially optimal decisions in both the criminal justice system and the labor market. Future work should focus on replicating and extending these preliminary estimates.

  1. INTRODUCTION

    Some decisions involve a choice between two options. In the case of a trial, the jury is trying to decide whether a person is guilty or innocent, starting from the null hypothesis that the person is innocent. In the case of employment, an employer is concerned about hiring a risky employee who will harm fellow employees or clients. In this simplest kind of decision framework, there are two kinds of errors. False positives are innocent or non-risky people who are identified as guilty/risky. (1) False negatives are guilty/risky people who are not identified as guilty/risky. (2)

    The Blackstone ratio on which this special issue is based makes it clear that policymakers can specify the nature of the tradeoff between these error rates. Specifically, in the context of conviction, Blackstone suggests that a justice system should be willing to accept ten false negatives to avoid one false positive. (3) There has been considerable subsequent debate about the relative harm of convicting an innocent man versus allowing a guilty man to go free, or whether such a tradeoff is even morally acceptable. (4) A detailed review of the literature by Alexander Volokh found that the most commonly accepted standard in the U.S. is that ten guilty men should go free before one innocent man is found guilty. (5) Volokh also found states that advocate for a one-to-one standard, as well as one state, Oklahoma, with a standard of one false positive for every one hundred false negatives. (6)

    I am not aware of a systematic effort to describe the false negative rate in the U.S. criminal justice system using archival data, but Shawn Bushway and Brian Forst provide a back-of-the-envelope estimate, based on aggregate data, of 1,500 to 3,000 false negatives for each false positive in the U.S. criminal justice system. (7) In this paper, I use existing data to construct a rudimentary estimate of the number of false negatives in murder investigations in Chicago in 1979. I find an empirical Blackstone ratio of sixty-one false negatives for every false positive, with a lower bound of thirty-five-to-one.
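    For concreteness, the quantity estimated here is simply a count ratio formed from the two error types defined above; the specific counts behind the sixty-one-to-one figure are developed in Part 2. A minimal statement of the estimand:

```latex
\text{empirical Blackstone ratio}
  \;=\; \frac{\text{false negatives}}{\text{false positives}}
  \;=\; \frac{\#\{\text{guilty persons not convicted}\}}{\#\{\text{innocent persons convicted}\}}
```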

    Employers and others who use criminal history records are more concerned about false negatives than false positives; that is, they want to avoid hiring risky people who will harm someone while working. As a result, employers are plausibly willing to tolerate a certain number of false positives for every false negative. In the extreme case, employers would be worried only about false negatives. Most of the costs of false positives, like increased crime due to frustration or a lack of legitimate income, are borne by agents other than the employer. However, society can make employers feel some of those costs (for example, through the threat of Title VII litigation). And employers can also bear direct costs if they cannot find enough qualified employees. Indeed, there is evidence that employers do willingly hire ex-offenders. (8)

    I am aware of no literature that tries to quantify the tradeoff between false positives and false negatives that employers find acceptable. In the second half of the paper, I conduct an exercise to back out the "acceptable" ratio of false negatives to false positives implied by an employer who uses a prison record as a predictor of homicide. I find that employers who decide not to hire people with prison records would have to argue, at minimum, that 930 false positives have the same cost as one false negative.
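    To make the logic of this back-calculation concrete, the sketch below works through the break-even condition for such a screening rule. It is an illustration only: the homicide rate among applicants with prison records used here (roughly one in 931) is a hypothetical figure chosen so that the implied ratio matches the 930-to-one number reported above; the paper's actual derivation appears later in the article.

```python
# A minimal sketch of the break-even cost ratio for a blanket rule that
# rejects every applicant with a prison record. The counts below are
# hypothetical, chosen only so the implied ratio matches the 930:1 figure
# in the text.

applicants_with_records = 931                # illustrative applicant pool
risky = 1                                    # would harm someone if hired
not_risky = applicants_with_records - risky  # would not harm anyone (930)

# Rejecting everyone with a record avoids `risky` false negatives (risky
# hires) at the price of `not_risky` false positives (safe people turned
# away). The blanket rule is cost-justified only if
#     risky * c_FN  >=  not_risky * c_FP,
# i.e., the cost of one false negative must be at least not_risky / risky
# times the cost of one false positive.

breakeven_cost_ratio = not_risky / risky
print(f"c_FN must be at least {breakeven_cost_ratio:.0f} times c_FP")  # 930
```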

  2. ESTIMATING THE IMPLIED "BLACKSTONE RATIO" IN MURDER CASES

    There are two good outcomes for any criminal case: the guilty are convicted, and the innocent are not. But two kinds of error can and will be committed in any system of justice: false negatives and false positives. Part of the value of the Blackstone ratio, in my view, is its explicit recognition that wrongful convictions are not the only errors committed in any system. The only way to ensure no false positives would be to fail to convict anyone.

    The Blackstone ratio also implies that the error tradeoff in any given system of justice is open to manipulation. Policymakers could, in principle, set rules and standards that shift the "justice" outcome to achieve a desired Blackstone ratio, holding constant the kinds of technological innovations that could reduce both forms of error simultaneously.

    However, this idea is hypothetical at this point because there are few good estimates of the number of errors made in the system. With the advent of DNA testing, there has been a recent burst of efforts to identify (and correct) wrongful convictions. Although some are skeptical that good estimates are even possible, others believe that sound strategies exist for generating reliable estimates of wrongful convictions in the system. A recent detailed effort found that 3.3% of those convicted in capital cases were factually innocent, (9) and there is some consensus from recent reviews that one to five percent of all felony convictions are of factually innocent people. (10)

    Within this literature, there is a consistent effort to differentiate the "factually innocent" from those who are acquitted and might actually be guilty. However, I found no explicit attempt to estimate the number or rate of false negatives. (11) But these two errors are linked systematically and unavoidably. Identifying one without the other captures only half of the error story, and could lead to changes in policy that dramatically increase the number of guilty people who are not convicted.

    In the following, I make use of a little-known study of police investigative techniques, based on crimes committed in 1979, to conduct a simple exploration of false negatives for murder in Chicago. (12) The authors of the study randomly sampled seventy-two murders in Chicago that were reported to the police in 1979. (13) There were 856 murder victims reported to the police that year, (14) so the sample represents 8.4% of the victim population. The authors collected information from the police about each investigation, as well as court data on the final outcome for each suspect. (15)

    The goal of this section is to fill in the decision grid for the individuals who committed these murders. The decision grid is provided in Table 1. The columns represent the decision of the criminal justice system, and the rows represent the factual guilt or innocence of the individuals involved.

    Although a more nuanced decision table is possible, this table represents the simplest possible description of the problem. The most important thing to note about the table is that it covers all people who murdered the victims in these cases, not just those suspects who are arrested or go to trial.

    A key fact of the criminal justice system is the high degree of selection from arrest to conviction. This selection is not random, but driven in large part by the probability of conviction. (16) The theory of "bargaining in the shadow of [the] trial" is one of the most obvious examples of this logic: legal theorists argue that pleas are driven by what would happen at trial. (17) The implication is that upstream actors in the criminal justice system know the standards of conviction and will not bring forward cases, or even make arrests, that will not meet this standard. Because this is true, the universe of cases that result in an arrest, or go to trial, is a highly selected subsample of cases. Starting at arrest, rules that exist to prevent false positives have the potential to create false negatives by eliminating from the criminal justice system guilty people against whom there is simply not enough evidence under the current standard of proof. Ignoring these people necessarily undercounts the number of false negatives. Thus, a reasonable alternative is to start with the number of people who committed the crimes, and to focus on the conviction decision.
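    To illustrate this counting approach, the sketch below fills in a stylized version of the decision grid and computes the resulting ratio of false negatives to false positives. The function name and example counts are placeholders for illustration only; they are not the study's figures and do not reproduce the cell lettering used in Table 1.

```python
# A stylized version of the decision grid in Table 1. Starting from everyone
# who actually committed the murders (not just arrestees), false negatives
# are the guilty people the system fails to convict, and false positives are
# the innocent people it convicts.

def blackstone_ratio(offenders_total, offenders_convicted, innocents_convicted):
    """Ratio of false negatives to false positives.

    offenders_total     -- everyone who committed the crimes (the starting universe)
    offenders_convicted -- guilty people who were convicted (true positives)
    innocents_convicted -- innocent people who were convicted (false positives)
    """
    false_negatives = offenders_total - offenders_convicted
    return false_negatives / innocents_convicted

# Hypothetical counts (not the study's), scaled so the output matches the
# sixty-one-to-one point estimate reported in the paper:
print(blackstone_ratio(offenders_total=100,
                       offenders_convicted=39,
                       innocents_convicted=1))  # -> 61.0
```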

    A priori, some elements of Table 1 will not be easily knowable. For example, F, the total number of people who did not commit the crime, is a conceptually difficult number. Clearly, any number of people in Chicago did not commit the crime, so this number could be arbitrarily large. Conceptually, it might be cleaner to think of this as a list of "persons of interest" who could plausibly have committed the crime, but did not. It might be possible to make a list of "persons of interest" for the...
