DOI: 10.1111/1745-9133.12500
SPECIAL ISSUE ARTICLE
TACKLING DISPARITY IN THE CRIMINAL JUSTICE SYSTEM
Almost politically acceptable criminal justice risk
assessment
Richard Berk1   Ayya A. Elzarka2
1University of Pennsylvania
2Google LLC
Correspondence
Richard Berk, Department of Criminology,
McNeil Hall, University of Pennsylvania,
Philadelphia, PA 19104.
Email: berkr@sas.upenn.edu
Research Summary: In criminal justice risk forecasting,
one can prove that it is impossible to optimize accuracy and
fairness at the same time. One can also prove that usually it
is impossible to optimize simultaneously all of the usual group
definitions of fairness. In policy settings, one necessarily
is left with tradeoffs about which many stakeholders will
adamantly disagree. The result is a contentious stalemate.
In this article, we offer a different approach. We do not seek
perfectly accurate and perfectly fair risk assessments. We
seek politically acceptable risk assessments. We describe
and apply a machine learning approach that addresses many
of the most visible claims of “racial bias” to arraignment
data on 300,000 offenders. Regardless of whether such
claims are true, we adjust our procedures to compensate.
We train the algorithm on White offenders only and com-
pute risk with test data separately for White offenders and
Black offenders. Thus, the fitted algorithm structure is the
same for both groups; the algorithm treats all offenders as if
they are White. But because White and Black offenders can
bring different predictor distributions to the White-trained
algorithm, we provide additional adjustments as needed.
Policy Implications: Insofar as conventional machine
learning procedures do not produce the accuracy and fair-
ness that some stakeholders require, it is possible to alter
conventional practice to respond explicitly to many salient
stakeholder claims even if they are unsupported by the facts.
Criminology & Public Policy. 2020;19:1231–1257. wileyonlinelibrary.com/journal/capp © 2020 American Society of Criminology 1231
1232 BERK AND ELZARKA
The result can be a politically acceptable risk assessment
tool.
KEYWORDS
fairness, forecasting, machine learning, racial bias, risk assessment
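To make the training scheme sketched in the Research Summary concrete, the snippet below illustrates one way such a procedure might look in code: a classifier is fit to White offenders only, and the same fitted structure then scores White and Black test cases separately. This is a minimal sketch, not the authors' implementation; the gradient-boosting learner, the column names (race, rearrest), and the input file are assumptions introduced here for illustration.

```python
# Minimal sketch of a "train on White offenders only" scheme.
# Assumptions (not from the article): a data file with a 'race' column,
# a binary 'rearrest' outcome, and a gradient-boosting classifier as the learner.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("arraignment_data.csv")  # hypothetical file
predictors = [c for c in df.columns if c not in ("race", "rearrest")]

train, test = train_test_split(df, test_size=0.3, random_state=0)

# Fit the algorithm on White offenders only, so the fitted structure
# is identical for every group it later scores.
white_train = train[train["race"] == "White"]
model = GradientBoostingClassifier().fit(white_train[predictors],
                                         white_train["rearrest"])

# Score White and Black test cases separately with the White-trained model.
for group in ("White", "Black"):
    subset = test[test["race"] == group]
    risk = model.predict_proba(subset[predictors])[:, 1]
    print(group, "mean predicted risk:", risk.mean())
```

Because the two groups can bring different predictor distributions to the White-trained model, the separate group-wise scores are the natural place for the kind of additional adjustments the article describes.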
Driven primarily by computer scientists, statisticians, and legal scholars, the literature on fairness for
algorithmic criminal justice risk assessments is large and rapidly growing (e.g., Berk, Heidari, Jabbari,
Kearns, & Roth, 2018; Chouldechova, 2016; Corbett-Davies, Pierson, Feller, Goel, & Huq, 2017;
Doleac & Stevenson, 2016; Friedler, Scheidegger, & Venkatasubramanian, 2016; Goel, Rao, & Shroff,
2016; Goel, Shroff, Skeem, & Slobogin, 2018; Hamilton, 2016; Huq, 2019; Kleinberg, Lakkaraju,
Leskovec, Ludwig, & Mullainathan, 2017b; Mayson, 2019; Starr, 2014). Many of the issues are complex.
In particular, there are provably inherent tradeoffs between different kinds of fairness and between
fairness and accuracy (Chouldechova, 2017; Kleinberg et al., 2017a).1 Despite well-intended aspirations,
you can’t have it all.
Proposed technical solutions typically select one or two kinds of fairness for which a “fair” algorithm
can be provided. Other forms of fairness and the fairness tradeoffs are ignored (Corbett-Davies & Goel,
2018). Reductions in accuracy are commonly an afterthought. There is, moreover, no single, dominant
kind of fairness. Different stakeholders can stubbornly hold different and legitimate conceptions of
fairness. Too often, gridlock is the result.
No clear resolution is likely in the near term. Meanwhile, criminal justice decisions will be made for
many thousands of offenders. Various forms of risk assessment will commonly inform those decisions.
In this article, we propose an algorithmic fallback position that might be applied immediately to the
construction of risk assessment tools. Rather than fair risk assessment, we offer, as a demonstration
of concept, politically acceptable risk assessment. Perhaps a politically acceptable risk assessment
approach can break the gridlock.
1 A BROADER VIEW OF RISK ASSESSMENT
Much of the controversy over risk assessment conflates several related processes that make informed
discussions extremely difficult. In particular, an “algorithm” often is blamed when by itself it
may introduce no unfairness whatsoever. Both critics and supporters sometimes fail to appreci-
ate that risk algorithms sit within a larger set of pursuits, any of which may be a source of
unfairness.2
Figure 1 provides an overview of the training and use of criminal justice risk algorithms. The pro-
cess begins with the collection of data, and the management of those data, used to train the algorithm.
For example, an arrest may be recorded on an electronic rap sheet that, in turn, is stored under an
offender’s unique identification number. Such data may be fully accurate and properly curated. Alter-
natively, the data may include information that some stakeholders label as “biased.” Stop-and-frisk
policing, for instance, is commonly blamed for inflated arrest counts attached to individuals from dis-
advantaged neighborhoods even when the scientific evidence can be equivocal (Grogger & Ridgeway,
2006; Ridgeway, 2006). Yet racial animus can play a role, often at the level of individual police offi-
cers (Ridgeway & MacDonald, 2014). In short, criminal justice data sometimes can be a root cause of
risk assessment unfairness, but the issues can be subtle.