Bias In, Bias Out.

Author: Mayson, Sandra G.
 

ARTICLE CONTENTS

INTRODUCTION
I. THE IMPOSSIBILITY OF RACE NEUTRALITY
   A. The Risk-Assessment-and-Race Debate
   B. The Problem of Equality Trade-offs
   C. Charting Predictive Equality
      1. Disparate Treatment (Input-Equality) Metrics
      2. Disparate Impact (Output-Equality) Metrics
         a. Statistical Parity
         b. Predictive Parity
         c. Equal False-Positive and True-Negative Rates (Equal Specificity)
         d. Equal False-Negative and True-Positive Rates (Equal Sensitivity)
         e. Equal Rate of Correct Classification
         f. Equal Cost Ratios (Ratio of False Positives to False Negatives)
         g. Area-Under-the-Curve (AUC) Parity
   D. Trade-offs, Reprise
      1. Equality/Accuracy Trade-offs
      2. Equality/Equality Trade-offs
II. PREDICTION AS A MIRROR
   A. The Premise of Prediction
   B. Racial Disparity in Past-Crime Data
   C. Two Possible Sources of Disparity
      1. Disparate Law Enforcement Practice?
      2. Disparate Rates of Crime Commission?
      3. The Broader Framework: Distortion Versus Disparity in the Event of Concern
III. NO EASY FIXES
   A. Regulating Input Variables
   B. Equalizing (Some) Outputs
      1. Equalizing Outputs to Remedy Distortion
      2. Equalizing Outputs in the Case of Differential Offending Rates
         a. Practical Problems
         b. Conceptual Problems
   C. Rejecting Algorithmic Methods
IV. RETHINKING RISK
   A. Risk as the Product of Structural Forces
   B. Algorithmic Prediction as Diagnostic
   C. A Supportive Response to Risk
      1. Objections
      2. Theoretical Framework
      3. Examples
   D. The Case for Predictive Honesty
CONCLUSION
APPENDIX: THE PRACTICAL CASE AGAINST ALGORITHMIC AFFIRMATIVE ACTION--AN ILLUSTRATION

INTRODUCTION

"There's software used across the country to predict future criminals. And it's biased against blacks." (1) So proclaimed an expose by the news outlet ProPublica in the summer of 2016. The story focused on a particular algorithmic tool, COMPAS, (2) but its ambition and effect was to stir alarm about the ascendance of algorithmic crime prediction overall.

The ProPublica story, Machine Bias, was emblematic of broader trends. The age of algorithms is upon us. Automated prediction programs now make decisions that affect every aspect of our lives. Soon such programs will drive our cars, but for now they shape advertising, credit lending, hiring, policing--just about any governmental or commercial activity that has some predictive component. There is reason for this shift. Algorithmic prediction is profoundly more efficient, and often more accurate, than is human judgment. It eliminates the irrational biases that skew so much of our decision-making. But it has become abundantly clear that machines too can discriminate. (3) Algorithmic prediction has the potential to perpetuate or amplify social inequality, all while maintaining the veneer of high-tech objectivity.

Nowhere is the concern with algorithmic bias more acute than in criminal justice. Over the last five years, criminal justice risk assessment has spread rapidly. In this context, "risk assessment" is shorthand for the actuarial measurement of some defined risk, usually the risk that the person assessed will commit future crime. (4) The concern with future crime is not new; police, prosecutors, judges, probation officers, and parole officers have long been tasked with making subjective determinations of dangerousness. The recent shift is from subjective to actuarial assessment. (5) With the rise of big data and bipartisan ambitions to be smart on crime, algorithmic risk assessment has taken the criminal justice system by storm. It is the linchpin of the bail-reform movement; (6) the cutting edge of policing; (7) and increasingly used in charging, (8) sentencing, (9) and allocating supervision resources. (10)

This development has sparked profound concern about the racial impact of risk assessment. (11) Given that algorithmic crime prediction tends to rely on factors heavily correlated with race, it appears poised to entrench the inexcusable racial disparity so characteristic of our justice system, and to dignify the cultural trope of black criminality with the gloss of science. (12)

Thankfully, we have reached a moment in which the prospect of exacerbating racial disparity in criminal justice is widely understood to be unacceptable. And so, in this context as elsewhere, the prospect of algorithmic discrimination has generated calls for interventions in the predictive process to ensure racial equity. Yet this raises the difficult question of what racial equity looks like. The challenge is that there are many possible metrics of racial equity in statistical prediction, and some of them are mutually exclusive. (13) The law provides no useful guidance about which to prioritize. (14) In the void, data scientists are exploring different statistical measures of equality and different technical methods to achieve them. (15) Legal scholars have also begun to weigh in. (16) Outside the ivory tower, this debate is happening in courts, (17) city-council chambers, (18) and community meetings. (19) The stakes are real. Criminal justice institutions must decide whether to adopt risk-assessment tools and, if so, what measure of equality to demand that those tools fulfill. They are making these decisions even as this Article goes to print. (20)

Among racial-justice advocates engaged in the debate, a few common themes have emerged. (21) The first is a demand that race, and factors that correlate heavily with race, be excluded as input variables for prediction. The second is a call for "algorithmic affirmative action" to equalize adverse predictions across racial lines. To the extent that scholars have grappled with the necessity of prioritizing a particular equality measure, they have mostly urged stakeholders to demand equality in the false-positive and false-negative rates for each racial group, or in the overall rate of adverse predictions across groups ("statistical parity"). Lastly, critics argue that, if algorithmic risk assessment cannot be made meaningfully race neutral, the criminal justice system must reject algorithmic methods altogether. (22)
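To make these competing demands concrete, the short sketch below computes several of the equality metrics at issue -- the rate of adverse predictions per group (statistical parity), false-positive and false-negative rates, and predictive parity -- for a hypothetical risk tool applied to synthetic data with unequal base rates. It is not drawn from the Article or from any real risk-assessment instrument; every group label, rate, and threshold is invented purely for illustration.

```python
# A minimal, hypothetical sketch (not from the Article) of how the equality
# metrics named above could be measured for an imagined risk tool.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Invented data: group membership, true outcomes (True = rearrest),
# and the tool's adverse predictions (True = flagged as high risk).
group = rng.choice(["A", "B"], size=n)
base_rate = np.where(group == "A", 0.40, 0.20)      # unequal base rates by group
outcome = rng.random(n) < base_rate
score = np.clip(base_rate + rng.normal(0, 0.15, n), 0, 1)
flagged = score > 0.30                              # one global risk threshold

def metrics(g):
    m = group == g
    flag_rate = flagged[m].mean()            # statistical parity compares this across groups
    fpr = flagged[m & ~outcome].mean()       # false-positive rate (1 - specificity)
    fnr = (~flagged)[m & outcome].mean()     # false-negative rate (1 - sensitivity)
    ppv = outcome[m & flagged].mean()        # predictive parity compares this across groups
    return flag_rate, fpr, fnr, ppv

for g in ("A", "B"):
    print(g, ["%.2f" % x for x in metrics(g)])
```

When base rates differ across groups, a tool of this kind can generally equalize some of these quantities only at the expense of others -- the mutual exclusivity noted above.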

This Article contends that these three strategies--colorblindness, efforts to equalize predictive outputs by race, and the rejection of algorithmic methods--are at best inadequate, and at worst counterproductive, because they ignore the real source of the problem: the nature of prediction itself. All prediction functions like a mirror. Its premise is that we can learn from the past because, absent intervention, the future will repeat it. Individual traits that correlated with crime commission in the past will correlate with crime commission in the future. Predictive analysis, in effect, holds a mirror to the past. It distills patterns in past data and interprets them as projections of the future. Algorithmic prediction produces a precise reflection of digital data. Subjective prediction produces a cloudy reflection of anecdotal data. But the nature of the analysis is the same. To predict the future under status quo conditions is simply to project history forward.

Given the nature of prediction, a racially unequal past will necessarily produce racially unequal outputs. To adapt a computer-science idiom, "bias in, bias out." (23) To be more specific, if the thing that we undertake to predict--say arrest--happened more frequently to black people than to white people in the past data, then a predictive analysis will project it to happen more frequently to black people than to white people in the future. The predicted event, called the target variable, is thus the key to racial disparity in prediction.
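The mechanics can be illustrated with a toy simulation. The sketch below is invented for this purpose -- it is not the Article's analysis and uses no real data. It trains a simple model on a synthetic "past" in which the target variable (arrest) occurs more often for one group, via a proxy feature such as prior arrests, and shows that the model projects the disparity forward even though race is never an input.

```python
# A minimal, invented sketch of "bias in, bias out": the target variable
# (a simulated arrest outcome) occurs more often for group A in the
# historical data, and a model trained on a proxy feature alone -- race is
# never an input -- reproduces that disparity in its predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.choice(["A", "B"], size=n)
# A proxy feature (e.g., prior arrests) distributed unequally across groups.
priors = rng.poisson(np.where(group == "A", 2.0, 1.0))
# Historical target: arrest, which depends on the proxy, so it too is unequal.
p_arrest = 1 / (1 + np.exp(-(priors - 2.0)))
arrested = rng.random(n) < p_arrest

model = LogisticRegression().fit(priors.reshape(-1, 1), arrested)
predicted_risk = model.predict_proba(priors.reshape(-1, 1))[:, 1]

for g in ("A", "B"):
    m = group == g
    print(g, "past arrest rate %.2f" % arrested[m].mean(),
          "mean predicted risk %.2f" % predicted_risk[m].mean())
# Each group's predicted risk mirrors its rate in the past data, even though
# race never appears as an input variable.
```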

The strategies for racial equity that currently dominate the conversation amount to distorting the predictive mirror or tossing it out. Consider input data. If the thing we have undertaken to predict happens more frequently to people of color, an accurate algorithm will predict it more frequently for people of color. Limiting input data cannot eliminate the disparity without compromising the predictive tool. The same is true of algorithmic affirmative action to equalize outputs. Some calls for such interventions are motivated by the well-founded belief that, because of racially disparate law enforcement patterns, arrest rates are racially distorted relative to offending rates for any given category of crime. But unless we know actual offending rates (which we generally do not), reconfiguring the data or algorithm to reflect a statistical scenario we prefer merely distorts the predictive mirror, so that it reflects neither the data nor any demonstrable reality. Along similar lines, calls to equalize adverse predictions across racial lines require an algorithm that forsakes the statistical risk assessment of individuals in favor of risk sorting within racial groups. And wholesale rejection of algorithmic methods rejects the predictive mirror directly.
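What output equalization would require mechanically can also be sketched. In the purely hypothetical snippet below, adverse predictions are equalized across two groups by applying a separate risk-score threshold within each group -- the within-group risk sorting just described -- rather than a single threshold for everyone. The scores, groups, and target rate are invented; the point is only to make the mechanism visible, not to endorse it.

```python
# An illustrative sketch (hypothetical data, not a recommendation) of
# equalizing adverse predictions by ranking risk scores within each group.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.choice(["A", "B"], size=n)
scores = np.clip(rng.normal(np.where(group == "A", 0.45, 0.30), 0.15), 0, 1)

target_flag_rate = 0.25  # flag the top 25% of each group, by construction

flagged_within_group = np.zeros(n, dtype=bool)
for g in ("A", "B"):
    m = group == g
    cutoff = np.quantile(scores[m], 1 - target_flag_rate)  # group-specific threshold
    flagged_within_group[m] = scores[m] >= cutoff

# Contrast: a single threshold applied to everyone, regardless of group.
global_cutoff = np.quantile(scores, 1 - target_flag_rate)
flagged_global = scores >= global_cutoff

for g in ("A", "B"):
    m = group == g
    print(g, "group-specific flag rate %.2f" % flagged_within_group[m].mean(),
          "single-threshold flag rate %.2f" % flagged_global[m].mean())
# Equal flag rates are achieved only by holding members of different groups
# to different risk thresholds -- risk sorting within, not across, groups.
```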

This Article's normative claim is that neither distorting the predictive mirror nor tossing it out is the right path forward. If the image in the predictive mirror is jarring, bending it to our liking does not solve the problem. Nor does rejecting algorithmic methods, because there is every reason to expect that subjective prediction entails an equal degree of racial inequality. To reject algorithms in favor of judicial risk assessment is to discard the precise mirror for the cloudy one. It does not eliminate disparity; it merely turns a blind eye.

Actuarial risk assessment, in other words, has revealed the racial inequality inherent in all crime prediction in a racially unequal world, forcing us to confront a much deeper problem than the dangers of a new technology. In making the mechanics of prediction transparent, algorithmic methods have exposed the disparities endemic to all criminal justice risk assessment, subjective and...
