Author: Kenneth C. Land
DOI: http://doi.org/10.1111/1745-9133.12271
Published: 01 February 2017
EDITORIAL INTRODUCTION
RECIDIVISM RISK ASSESSMENT
Automating Recidivism Risk Assessment
Should We Stay or Should We Go?
Kenneth C. Land
Duke University
An increasingly salient task for practitioners in corrections is recidivism risk assessment of prisoners. Recent decades have seen a move from more qualitative, clinical judgment toward standardized tools that assess the probability that released offenders will commit new crimes. This trend has been fostered by advances in measurement methodologies and associated scales, as well as by the growing size of correctional populations. Limited criminal justice system capacity also means that agencies increasingly use risk assessment instruments to allocate resources to the offenders assessed as posing the highest risk to society.
Because the methods of scoring and the instruments themselves vary widely, research is needed on the approaches that lead to the most reliable and valid outcomes. In response to this need, Grant Duwe and Michael Rocque (2017, this issue) study the relationship between reliability and validity with data from the Minnesota Screening Tool Assessing Recidivism Risk (MnSTARR), a risk assessment instrument the Minnesota Department of Corrections (MnDOC) developed and began using in 2013. Using follow-up data on offenders released in 2014 and manual MnSTARR assessments scored by MnDOC staff, Duwe and Rocque assess the impact of inter-rater reliability (IRR) on predictive performance (validity). They also compare the reliability of a manual scoring process with that of an automated one.
Duwe and Rocque (2017) find that MnDOC staff scored the MnSTARR with a high degree of consistency, with intraclass correlation (ICC) values at high levels, which they attribute to the instrument comprising mostly objective rather than subjective risk factors. Even with high IRR values on the manually scored instruments, they also report (a) that the automated assessments significantly outperformed those scored manually and, as might be expected, (b) that greater inter-rater disagreement was associated with poorer predictive performance.
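The editorial does not reproduce Duwe and Rocque's computations, and it does not say which ICC form they used. As a rough, hypothetical illustration of the two quantities being discussed, the sketch below computes ICC(2,1), one common intraclass correlation for inter-rater reliability, and an area under the ROC curve (AUC) as a measure of predictive validity, using invented rater scores and outcomes; the data, the three-rater setup, and the use of numpy and scikit-learn are assumptions for illustration only, not the authors' method.

# Illustrative only: ICC(2,1) for inter-rater reliability and AUC for
# predictive validity, on invented data (not from Duwe and Rocque, 2017).
import numpy as np
from sklearn.metrics import roc_auc_score

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `scores` is an (n_subjects, k_raters) array of risk scores.
    """
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)            # one mean per offender
    col_means = x.mean(axis=0)            # one mean per rater
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_total = ((x - grand) ** 2).sum()
    ss_error = ss_total - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))
    # Shrout and Fleiss ICC(2,1)
    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

rng = np.random.default_rng(0)

# Hypothetical manual scores from three raters for 200 offenders:
# a shared underlying risk plus rater-specific scoring noise.
true_risk = rng.normal(size=200)
manual = true_risk[:, None] + rng.normal(scale=0.3, size=(200, 3))

# Hypothetical automated scores: same inputs, no scoring noise.
automated = true_risk

# Hypothetical reoffense outcomes driven by the underlying risk.
reoffend = (true_risk + rng.normal(scale=1.0, size=200)) > 0

print(f"ICC(2,1) across raters: {icc_2_1(manual):.2f}")
print(f"AUC, manual (rater 1):  {roc_auc_score(reoffend, manual[:, 0]):.2f}")
print(f"AUC, automated:         {roc_auc_score(reoffend, automated):.2f}")

In this toy setup, the automated scores have no rater noise, so their AUC exceeds that of any single manual rater, mirroring the direction (though not the magnitude) of the comparison the editorial describes.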
Direct correspondence to Kenneth C. Land, Department of Sociology, Duke University, Box 90088, Durham,
NC 27708-0088 (e-mail: kland@soc.duke.edu).
