A REPLACEMENT FOR JUSTITIA'S SCALES? MACHINE LEARNING'S ROLE IN SENTENCING.

Author: Donohue, Michael E.
 

TABLE OF CONTENTS

I. INTRODUCTION
II. THE MACHINE AS MANIPULATOR
   A. Recidivism-Risk Tools
   B. The Risk of Anchoring on a Recidivism Score
      1. Objections to Population-Level Input Data
      2. Objections to an Opaque and Proprietary Tool
      3. Objections to Anchoring on a Single Philosophy of Punishment
III. THE MACHINE AS MASTER
   A. The U.S. Sentencing Guidelines
   B. The Trouble with Replacing Discretion
IV. THE MACHINE AS MENTOR
   A. An SIS-Enhanced Common Law of Sentencing
   B. A Machine-Learning-Powered Dialog
      1. Mitigating Cognitive Biases
      2. Enabling Conversation
V. CONCLUSION

I. INTRODUCTION

To believe that this judicial exercise of judgment could be avoided by freezing "due process of law" ... is to suggest that the most important aspect of constitutional adjudication is a function for inanimate machines and not for judges.... Even cybernetics has not yet made that haughty claim.

--Justice Felix Frankfurter (1)

Criminal sentencing is one of the most difficult responsibilities of judging. (2) It is a different sort of task than the others that face a judge; unlike deciding upon motions or policing the arguments of counsel, sentencing comes down to a singular moment of moral judgment shared between the robed jurist and the defendant standing before the bench. (3)

The task of sentencing is hard because judges face multiple and conflicting instructions from the legislature and society. The sentence must exact proportional retribution for the wrong committed. It must deter the defendant from offending again, as well as others from offending in the first place. The sentence must be long enough to protect society from danger. And, perhaps, the sentence must be of a suitable length and type to rehabilitate the defendant for re-entry into society after punishment. (4) Only occasionally do these instructions point in the same direction, and one judge's interpretation of where they point will differ from others', threatening uniformity across chambers and jurisdictions. As an additional complicating factor, the judge, often a lawyer by training, has limited information about the defendant and the crime in question. At the time of sentencing, the judge will have only experienced a handful of hearings, including, if there is no plea agreement, a trial focused on determining guilt; the judge has even less information on the impact of any possible sentence. (5)

To ease this process--and to ensure to some degree that the judiciary acts as an agent of the legislature's will--legislatures have created a number of tools to quantify the punishment any given defendant deserves. Some have promulgated guidelines as a framework--or mandate--for judges to use in sentencing, (6) and researchers have recommended evidence-based sentencing practices to better understand which defendants are most likely to pose a future danger to society.

Some have sought to apply the latest capabilities in data analysis and processing--machine learning--to this task. (7) Despite the promises of these techniques and technologies, however, all have met with criticism from both defendants and judges. (8)

What explains the criticism for these tools, especially the ones based on machine learning? After all, they have the capability to dispassionately apply the law in every case. They can be, at least theoretically, programmatically blinded to factors that are impermissible to consider. (9) Legislatures can imbue these tools with precise weights and algorithms for consideration of the facts. Moreover, once we determine why we recoil from using these capabilities, what do we do about it?

We are uncomfortable with using these capabilities because we are uncomfortable with how the tools we create interfere with and replace the discretion of human judges. But these tools hold great promise if we can find ways for them to assist judges in their exercise of discretion, rather than usurp it. This Note advances that argument by analyzing objections to two different attempts to control judges' discretion: one a creature of computer code and the other a creature of committee. Part II discusses how machine-learning-based recidivism-risk scores in sentencing can manipulate judges by authoritatively anchoring them to only the single sentencing philosophy of incapacitation, a phenomenon this Note terms "philosophy anchoring." This Note further argues that such philosophy anchoring poses a greater threat than a related phenomenon, "starting-point anchoring," which existed in simpler sentencing tools before the emergence of machine-learning-based instruments. Part III explores a different form of algorithmic control through an analysis of the U.S. Sentencing Guidelines during the period in which they were mandatory. The Part finds that the U.S. Sentencing Commission's attempts to come up with a comprehensive and mandatory set of sentencing instructions were met with criticism because they acted as master over judges by removing human discretion entirely. (10)

This Note concludes by suggesting several methods for how these tools could act as mentor and partner to the judiciary. Part IV endorses a new attempt at creating a common law of sentencing, using machine-learning-powered data entry and analytics to inform judges of the outcomes and reasoning behind their colleagues' sentencing decisions. Then, the Part more ambitiously proposes researching ways to create artificial-intelligence-infused assistants for judges to actively combat cognitive biases and create instantaneous dialog among stakeholders.

  II. THE MACHINE AS MANIPULATOR

    One of the more recent applications of machine-learning-based systems to criminal justice is the use of recidivism-risk scores to provide input into sentencing decisions. Although the use of machine learning in the development of risk scores does give rise to several objections, many of those objections are not unique to the use of machine learning or even to the use of risk scores. (11) One objection--anchoring on a computationally determined measure of a philosophy of punishment--does pose a unique concern as it risks a particularly troubling interaction of judge and machine.

    A. Recidivism-Risk Tools

      Tools that attempt to measure the likelihood an offender would violate the law again were first used to determine which inmates to release on parole in the 1930s. (12) These tools used a basic regression model based on race, ethnicity, education, intelligence, and background for their predictions. (13) Equivant, the developer of the machine-learning-based Correctional Offender Management Profiling for Alternative Sanctions tool ("COMPAS"), classifies these tools as the second generation of risk assessment. (14)

      Through the middle of the twentieth century, social scientists developed more sophisticated frameworks to assess recidivism risk, called "third generation" tools. (15) These tools--like the Level of Service Inventory-Revised ("LSI-R")--used dozens of variables and depended on the services of a professional assessment officer. The officer would both collect data on the offender and conduct an interview. Topics included the offender's social network, family history, and neighborhood. (16) After collecting this data, the officer would produce a risk score. (17)
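The checklist-and-interview process described above can be sketched in a few lines of Python. This is a toy illustration only: the items, weights, and risk bands below are invented for demonstration and bear no relation to the actual LSI-R instrument.

```python
# Illustrative sketch of a third-generation-style instrument: an assessment
# officer records yes/no answers to checklist items, and the tool sums them
# into a raw score and a coarse risk band. All items here are hypothetical.

ITEMS = ["unstable_employment", "antisocial_peers", "family_criminality"]

def risk_score(answers):
    """Sum the officer's yes/no checklist answers into a raw score."""
    return sum(1 for item in ITEMS if answers.get(item, False))

def risk_band(score):
    """Map the raw score to a coarse band, as such tools typically report."""
    if score <= 1:
        return "low"
    if score == 2:
        return "moderate"
    return "high"

# Hypothetical interview results for one offender:
answers = {"unstable_employment": True,
           "antisocial_peers": True,
           "family_criminality": False}
score = risk_score(answers)  # 2
band = risk_band(score)      # "moderate"
```

The design point is transparency: because the score is a simple sum of known items, an examiner (or a defendant) can see exactly which answers drove the result, a property the fourth-generation tools discussed next do not share.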

      Equivant terms the most sophisticated tools, including its own COMPAS, "fourth generation." (18) These tools use machine learning in their modeling, link directly to government databases, and provide unified computer interfaces for examiners. And, unlike the third-generation tools, they can output an explicit forecast, rather than a score. (19) The tools process a training dataset of inputs (that is, offender characteristics) and outputs (that is, whether the offender offended again), and then, depending on the precise method used, create a model into which new inputs can be entered to generate a forecast for any given offender. (20) A major difference from third-generation tools is that when a forecast is generated, it can be difficult to understand precisely what led to the system's determination. (21)
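The train-then-forecast workflow described above can be sketched as a minimal supervised-learning example. This is not COMPAS: the features, training data, and model (a hand-rolled logistic regression) are invented for illustration; the point is only the shape of the pipeline, in which labeled historical cases produce a model that emits a forecast for a new offender.

```python
# Toy "fourth-generation"-style pipeline: fit a model on labeled historical
# cases, then forecast for a new case. Everything here is hypothetical.
import math

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Fit a simple logistic-regression model by stochastic gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted reoffense probability
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def forecast(w, b, x, threshold=0.5):
    """Return an explicit forecast (not just a score) for a new offender."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    p = 1.0 / (1.0 + math.exp(-z))
    return ("reoffend" if p >= threshold else "no reoffend"), p

# Invented training data: [prior_offenses, age_at_first_offense / 100]
X = [[0, 0.35], [1, 0.30], [5, 0.18], [4, 0.20], [0, 0.40], [6, 0.17]]
y = [0, 0, 1, 1, 0, 1]  # 1 = reoffended within the follow-up window

w, b = train_logistic(X, y)
label, p = forecast(w, b, [3, 0.22])  # forecast for a new, unseen offender
```

Even in this transparent toy, the forecast is the product of learned weights rather than a visible checklist; in real fourth-generation tools, with far more features and proprietary methods, tracing a forecast back to its causes is correspondingly harder.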

      These forecast models are not designed to be used in determining post-trial incarceration. (22) Rather, they are designed to be used in determining which defendants should be granted bail during pre-trial proceedings or to be released on parole. (23) But starting with Virginia in 1994, many states have permitted or mandated their use in sentencing, and the judges who pass down the sentence are aware of the risk scores developed during preliminary proceedings as well. (24)

    B. The Risk of Anchoring on a Recidivism Score

      Although federal courts have declined to rule on the use of risk scores in sentencing, several state supreme courts have affirmed their use under state law, state constitutions, and the federal Constitution. (25) The defendant in State v. Loomis presented to the Wisconsin Supreme Court the most comprehensive challenge to date. There, the defendant articulated three objections to the use of COMPAS in deciding his sentence: (1) the proprietary nature of the product prevented him from challenging its accuracy; (2) the product was based on group, rather than individualized, data; and (3) the product used unconstitutional inputs. (26) The court rejected each of these criticisms in turn, writing that (1) the defendant could verify COMPAS's inputs and argue against them; (27) (2) the risk score merely guided the discretion of a human decision-maker; (28) and (3) there was no indication the sentencing judge had been swayed by any unconstitutional information. (29) Although the court cautioned trial judges in their use of COMPAS, forbidding them from relying solely on it when deciding on incarceration and requiring that they be informed of the tool's limitations, it ultimately upheld the defendant's sentence. (30)

      The Loomis defendant's objections to the use of COMPAS can be grouped into two main categories: first, that the...
