HAVING YOUR DAY IN ROBOT COURT

Benjamin Minhao Chen

TABLE OF CONTENTS

I. INTRODUCTION
II. AUTOMATING THE JUDICIARY AND PROCEDURAL LEGITIMACY
III. TWO EXPERIMENTAL STUDIES
   A. Study 1
      1. Experimental Scenarios
      2. Experimental Treatments
      3. Hypotheses
      4. Data and Analysis
   B. Study 2
      1. Scenario, Experimental Treatments, and Hypotheses
      2. Data and Analysis
      3. Accounting for the Perceived Fairness Gap Between Human and Algorithmic Decision-Makers
IV. IMPLICATIONS
   A. The Human-AI Fairness Gap: A Challenge for Robot Judges
   B. Offsetting the Human-AI Fairness Gap
   C. Beyond Perceived Fairness: Accuracy, Bias, and Other Factors
V. CONCLUSION

I. INTRODUCTION

"Can you foresee a day when smart machines, driven with artificial intelligences, will assist with courtroom factfinding or, more controversially even, judicial decision-making?" (1) Shirley Ann Jackson, a college president and theoretical physicist, posed this question to Chief Justice John Roberts in 2017. The Chief Justice's answer? "It's a day that's here." (2)

Artificial intelligence ("AI") already plays a role in the U.S. legal system but has thus far primarily served as an aid. For example, algorithms recommend but do not determine criminal sentences in some states. (3) Elsewhere, AI systems could function as primary decision-makers in some administrative contexts, such as terminating welfare benefits or targeting people for air travel exclusions. (4) Outside the United States, there are plans to give greater judicial decision-making responsibility to machines. (5) Estonia is piloting AI adjudication of some small claims. (6) China has declared the integration of AI into judicial processes a national priority, introducing, for example, precedent recommendation systems that assist human judges by formulating judgments based on past decisions. (7)

As technological advances make robot judging a possibility, challenging value judgments must be made. Perhaps the most critical objection sounds in procedural fairness. Would a judicial proceeding overseen by a robot judge undermine the constitutional right to a fair trial? (8) This concern can be articulated doctrinally: Does robot judging violate the European Convention on Human Rights' fair trial standards or constitutional commitments to due process? (9) The concern can also be articulated in legal-ethical terms. Assuming the doctrinal hurdles are overcome, would people reject robot judging as procedurally unfair?

This Article enters the debate from this second perspective, considering people's judgments of procedural fairness. A long tradition in legal psychology has studied procedural justice in this way. (10) Evidence suggests that the perceived fairness of legal processes has far-reaching practical implications. People obey the law, in part, because it is seen to be fair. (11) The public's assessment of the fairness of robot judges is thus crucial, both for those concerned with legal compliance and those who ascribe intrinsic value to ordinary citizens' conceptions and experiences of fairness.

Fairness and procedural legitimacy are at the heart of modern debates about AI judging. As Campbell puts it, "[i]n asking whether AI can play the role of judges, we must ask... [whether] AI courts can enable public participation, give participants a sense of being fairly heard... [and] vindicate the legitimacy not just of the courts, but of the governmental systems within which they reside." (12)

Richard Re and Alicia Solow-Niederman articulate a similar concern, noting that "the incomprehensibility of an AI adjudicator could pose legitimacy or fairness problems for individuals who are subjects of AI adjudication.... The individual without comprehension might thus experience special or separate [procedural] harms." (13) Even in discussions about alternative dispute resolution, perceived procedural fairness matters. For example, a central criterion in assessing whether computers can "be fair" in online dispute resolution is "disputants' evaluation of the fairness of... [the] process." (14)

Whether people see robot judges as fair is a largely unexplored empirical question. (15) We present evidence on how people evaluate decisions rendered by robot judges, drawing on a series of original experimental studies involving a large sample of U.S. participants. These vignette experiments vary the decision-maker (human or algorithm), the scenario (consumer arbitration, bail, or sentencing), whether a hearing is held, and whether the judge's decision is interpretable. (16)
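To make the design concrete, the sketch below shows how a between-subjects vignette experiment of this kind can be enumerated and randomized. It is a minimal illustration only: the factor labels mirror the description above, the full crossing of all four factors is an assumption, and the function and variable names are placeholders rather than the authors' actual instrument.

```python
import itertools
import random

# Illustrative factors for a between-subjects vignette design. The labels
# below are placeholders mirroring the text above, not the authors' materials.
FACTORS = {
    "decision_maker": ["human judge", "AI judge"],
    "scenario": ["consumer arbitration", "bail", "sentencing"],
    "hearing": ["hearing held", "no hearing"],
    "interpretable": ["reasons given", "no reasons given"],
}

# Cross the factor levels to enumerate every experimental cell
# (2 x 3 x 2 x 2 = 24 conditions in this sketch).
CONDITIONS = [
    dict(zip(FACTORS, levels)) for levels in itertools.product(*FACTORS.values())
]


def assign(participant_ids, seed=0):
    """Randomly assign each participant to one vignette condition."""
    rng = random.Random(seed)
    return {pid: rng.choice(CONDITIONS) for pid in participant_ids}


if __name__ == "__main__":
    for pid, condition in assign(range(3)).items():
        print(pid, condition)
```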

The study yields two significant findings. First, there is a clear human-AI fairness gap: Proceedings conducted by human judges were seen as fairer than those conducted by AI judges. Second, the procedural fairness advantage of human judges appears neither irreducible nor absolute. Remarkably, participants did not regard a hearing before an AI judge as meaningless. On the contrary, having the opportunity to speak and be heard increased procedural fairness ratings for both human- and AI-adjudicated proceedings. Our results hint at the possibility of "algorithmic offsetting": the human-AI fairness gap can be offset, partly and perhaps even entirely, by introducing into AI adjudication procedural elements that might be absent from current processes, such as a hearing or an interpretable decision.

Moreover, an exploratory mediation analysis suggests that the human-AI fairness gap is explained by "hard" factors, like the perceived accuracy and thoroughness of the decision-making process, more so than by distinctively human, "soft" factors, like the decision-maker's understanding of the litigant's position or a feeling that the litigant had a voice. This finding suggests that in domains where quantitative information about a decision's accuracy is available, the superior accuracy of algorithms may eventually erode or even eliminate the fairness gap.
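The claim about "hard" versus "soft" mediators can be made concrete with a standard product-of-coefficients mediation estimate. What follows is a minimal sketch only, assuming a tidy data set with a binary decision-maker indicator and rated mediator and fairness scales; the column names (human_judge, accuracy, fairness), the file name, and the bootstrap procedure are illustrative assumptions, not the authors' actual estimation code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf


def indirect_effect(df, treatment, mediator, outcome):
    """Product-of-coefficients estimate of the indirect (mediated) effect."""
    # Path a: does the treatment (human vs. AI judge) shift the mediator?
    a = smf.ols(f"{mediator} ~ {treatment}", data=df).fit().params[treatment]
    # Path b: does the mediator predict perceived fairness, holding treatment fixed?
    b = smf.ols(f"{outcome} ~ {treatment} + {mediator}", data=df).fit().params[mediator]
    return a * b


def bootstrap_ci(df, treatment, mediator, outcome, reps=2000, seed=0):
    """Percentile bootstrap confidence interval for the indirect effect."""
    rng = np.random.default_rng(seed)
    draws = [
        indirect_effect(
            df.sample(frac=1, replace=True, random_state=int(rng.integers(2**31 - 1))),
            treatment,
            mediator,
            outcome,
        )
        for _ in range(reps)
    ]
    return np.percentile(draws, [2.5, 97.5])


# Hypothetical usage: 'human_judge' is a 0/1 indicator for the decision-maker,
# 'accuracy' a rated mediator, and 'fairness' the perceived-fairness outcome.
# df = pd.read_csv("study2.csv")
# print(indirect_effect(df, "human_judge", "accuracy", "fairness"))
# print(bootstrap_ci(df, "human_judge", "accuracy", "fairness"))
```

A percentile bootstrap is used in this sketch because the sampling distribution of a product of coefficients is generally not normal.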

The final Part of the Article develops implications from these findings. We elaborate on the idea of algorithmic offsetting: closing the human-AI fairness gap by issuing AI decisions that are more interpretable than human-rendered decisions, or by offering litigants a meaningful hearing before an AI judge when they would not have had such an opportunity in a human-adjudicated proceeding. The empirical results indicate that people evaluate AI judging under such circumstances as being as procedurally fair as human judging. And, as Eugene Volokh puts it, "[o]ur question should not be whether AI judges are perfectly fair, only whether they are at least as fair as human judges." (17)

It might seem that "having your day in court" requires being heard before a human judge and that anything else is unfair. Insofar as human judges set the standard for fairness, our results imply that the procedural justice objection to robot judges may not be decisive. Were robot judges to become more accurate, comprehensive, interpretable, or responsive, their decision-making might even be seen as fairer than that of human judges in some situations.

II. AUTOMATING THE JUDICIARY AND PROCEDURAL LEGITIMACY

Should machines decide cases? While commentators describe the rise of AI in epochal terms, the thought that robots might one day settle legal disputes is hardly new. In 1977, human rights scholar Anthony D'Amato mused that computers might replace judges, assuming that "the law has been made completely determinable" and automation would eliminate discretion in judicial decision-making. (18) But law has not become completely determinable. Nor is it likely to. Legal language is "open-textured," (19) and the rivalry between textualism, intentionalism, and purposivism persists in statutory interpretation. (20) Meanwhile, the evaluative nature of many common law concepts means that applying old wisdom to new problems remains an exercise in normative reasoning. Instead of repudiating human judgment, state-of-the-art computers strive to replicate it. (21) Modern algorithms identify and harness empirical relationships more effectively than their predecessors by leveraging greater computing power and more flexible modeling strategies. (22)

Simple models have already outperformed lawyers in predicting decisions of the U.S. Supreme Court, (23) and more sophisticated models now boast impressive accuracy for a diverse range of tribunals. (24) Their apparent success has excited interest in the possibility of faster, cheaper, and better justice delivered by robot judges.

The role of AI in American criminal law remains very much advisory; legal judgment continues to be delivered by judges sitting in courtrooms. (25) But in the United Kingdom, public law barrister Lord Pannick has wondered "whether consistency in sentencing decisions might be promoted, irrelevant factors excluded, and a lot of money saved on sentencing appeals by the use of a computer programme." (26) And while no jurisdiction has to date been bold enough to let an algorithm alone determine a person's guilt or innocence, at least one nation is prepared to let machines resolve some kinds of cases. Estonia is building a system to adjudicate small claims where the amounts in controversy are below €7,000. (27) According to the chief data scientist on the project, Ott Velsberg, the country is hospitable ground for such an experiment given that its 1.3 million residents are accustomed to digitized public services like voting and tax filing. (28)

These developments raise questions about human adjudication's distinctiveness and its future. From a theoretical perspective, adjudication has never been solely about achieving the correct result. Lon Fuller, for example, characterized adjudication as a form of social ordering distinguished by "the fact that it confers on the affected party a peculiar form of participation in the decision, that of presenting proofs and reasoned arguments for a decision in his favor." (29) Fuller hence reasoned that "[w]hatever heightens the significance...
