ACCURACY IS NOT ENOUGH: THE TASK MISMATCH EXPLANATION OF ALGORITHM AVERSION AND ITS POLICY IMPLICATIONS.

Ethan Lowens

TABLE OF CONTENTS

I. INTRODUCTION
II. EXISTING EXPLANATIONS OF ALGORITHM AVERSION: THE INACCURACY EXPLANATION AND THE CONFUSION EXPLANATION
III. THE TASK MISMATCH EXPLANATION OF ALGORITHM AVERSION
IV. EMPIRICAL STUDY ON THE TASK MISMATCH EXPLANATION OF ALGORITHM AVERSION
  A. Background on Pre-Trial Detention Decisions
  B. Survey Design
  C. Hypotheses
  D. Limitations and Assumptions
  E. Results
  F. Analysis of Results
V. POLICY IMPLICATIONS
  A. Public Policy Responses to Algorithm Aversion
  B. Implications for Algorithm Advocates, Detractors, and Designers
VI. CONCLUSION
APPENDIX

I. INTRODUCTION

Humans are poor and inconsistent forecasters. We have limited memory and processing power, and we are misled by cognitive defects and artifacts. It is not surprising that algorithms often do better. (1) What is puzzling is that people prefer to rely on human forecasters even when they are given overwhelming evidence that an algorithm would be more accurate. That phenomenon has inspired a wave of research on the drivers of "algorithm aversion."

Existing research dismisses algorithm aversion as the irrational consequence of cognitive biases. It appends algorithm aversion to the ever-expanding list of documented human cognitive defects, such as the tendency to value an object more when it is in one's possession than when it is not, (2) or the tendency of local news viewers to believe that crime is more prevalent in their neighborhoods than it really is. (3) This view suggests that popular outcry against an algorithm deserves little, if any, deference.

I argue that the story is not so simple. Prior studies examined algorithm aversion in a situation designed so that a human and an algorithm perform exactly the same task. In reality, such a situation is rare, if it exists at all. Aversion to an algorithm replacing humans in the real world may result from an intuition or observation that the algorithm lacks important capabilities. The algorithm's shortcoming may be technical, failing to account for key variables or malfunctioning under certain conditions. Alternatively, it may be metaphysical: virtually every task performed by a human involves some element of discretion or human touch that an algorithm cannot emulate.

At the core of this article is an empirical study which finds that a perceived mismatch between the task performed by a human and the capability of an algorithm poised to replace her drives respondents' aversion to the algorithm. I call this the "task mismatch" explanation of algorithm aversion.

It follows from the results of this study that policymakers should not systematically dismiss algorithm aversion as irrational. Popular outcry against an algorithm, motivated by a perceived task mismatch, may signal that adopting the algorithm would have unintended consequences. This signal is especially valuable where policymakers do not have personal experience in the context they are regulating--for instance, navigating the immigration system or enrolling for state-sponsored nutrition, healthcare, or housing benefits. In those contexts, such a popular response may be the only way to detect a task mismatch.

The paper proceeds as follows: In Part II, I review past research on algorithm aversion. In Part III, I introduce the task mismatch explanation of algorithm aversion. In Part IV, I report results from an empirical study. In the study, participants learn about an algorithm that predicts whether a criminal defendant will fail to appear for trial with far greater accuracy than human judges. They are then asked whether judges or the algorithm should decide if criminal defendants are released before trial. The study shows that a considerable portion of people who expressed algorithm aversion were wary of a task mismatch between judges and their potential algorithmic replacement. In Part V, I discuss two implications of my findings. First, the task mismatch explanation for algorithm aversion documented in this paper, along with past research, creates a roadmap for policymakers to interpret and respond to algorithm aversion. Second, advocates and detractors of algorithms can leverage task-mismatch-driven algorithm aversion to influence popular opinion toward an algorithm.

II. EXISTING EXPLANATIONS OF ALGORITHM AVERSION: THE INACCURACY EXPLANATION AND THE CONFUSION EXPLANATION

    Algorithms outperform human forecasters in myriad contexts. A meta-analysis of 136 studies between 1944 and 1994 found that, with only eight exceptions, algorithmic forecasters were as accurate as, or more accurate than, human forecasters. (4) Since then, computing power and machine learning have improved, increasing algorithms' sophistication and accuracy. Moreover, algorithmic forecasters have additional advantages over humans: They are often more economical (5) and exceedingly consistent. (6)

    Yet, given the choice, people often prefer to rely on human forecasters. (7) This observation gave birth to the term "algorithm aversion" and an academic quest to determine its underlying causes. Recent empirical studies have focused on scenarios where participants may choose to rely on an algorithmic or human forecaster to perform exactly the same task: making a prediction about a future event. (8) Participants then receive evidence that the algorithm is a more accurate forecaster, and yet, to their detriment, most opt against relying on the algorithm.

    One explanation for this phenomenon is that people wrongly perceive algorithms as less accurate, in spite of evidence that they are more accurate (the "inaccuracy explanation"). Dietvorst et al. demonstrate a mechanism behind this explanation: People display greater intolerance for error from algorithms than from humans. (9) If people see an algorithm make mistakes, they dismiss the algorithm as flawed. (10) When they see a human err, they are willing to give him or her another chance, believing he or she will learn. (11) In the Dietvorst et al. study, participants chose whether to rely on a human forecaster (either themselves or an anonymous third party) or an algorithm to predict the academic performance of MBA students based on their admissions files. (12) They received cash compensation for accurate predictions. (13) After seeing the algorithm perform (and make some mistakes), the vast majority (74%) of participants chose to rely on a human forecaster. (14) They did so in spite of the fact that they also observed that the algorithm was, on the whole, more accurate than the human forecaster. (15) This choice was costly--for most participants, relying on the algorithm would have resulted in considerably higher payments. (16)

    In a subsequent study, Yeomans et al. identified a different driver of algorithm aversion: People's distrust of algorithms may stem from a lack of understanding of how they work (the "confusion explanation"). (17) In Phase 1 of the Yeomans et al. study, participants predicted how funny a counterpart (the "target") would find a set of jokes. (18) Before the participant made her predictions, the target had rated a list of twelve jokes. (19) The participant had the opportunity to calibrate her predictions by seeing the target's ratings of four of these jokes. (20) Then, the participant predicted the target's ratings of the other eight jokes. (21) Perhaps surprisingly, an algorithm's predictions were consistently more accurate than the human participants', even when the target and participant were close friends or relatives. (22) Subsequently, participants were told that they could rely on help from the exceedingly accurate algorithm, and, to their detriment, many refused the offer. (23) In a variation of the study, people accepted help at higher rates after they read an explanation of how the algorithm worked. (24)

    The Dietvorst et al. and Yeomans et al. studies examine controlled scenarios where a human and an algorithm perform the exact same task--make a prediction--and participants have concrete evidence that the algorithm is the superior predictor. Choosing to rely on a human forecaster under these circumstances is irrational: It clearly conflicts with participants' interest in making the best predictions. And yet, most did so anyway. These studies provide convincing evidence that cognitive defects, presented as the inaccuracy explanation and the confusion explanation, contribute to irrational algorithm aversion in carefully controlled laboratory settings.

III. THE TASK MISMATCH EXPLANATION OF ALGORITHM AVERSION

    However, it would be a mistake to generalize from Dietvorst et al.'s and Yeomans et al.'s findings that all algorithm aversion is irrational or the product of cognitive defects. Unlike the contrived situations in these studies, in many, if not most, instances where algorithms are poised to replace human actors, the humans are not merely prediction machines.

    People may spurn an algorithm when they perceive that it does not perform the same function as the human it is poised to replace; in other words, where they perceive a task mismatch. Consider the following hypothetical: Michelle must choose between an algorithm and a human to complete Task X. Task X is traditionally performed by humans. Predicting Y is a necessary component of Task X. Michelle knows that the algorithm is extremely accurate at predicting Y--considerably more accurate than any person. She also completely understands how the algorithm works. If Michelle perceives that there is more to Task X than predicting Y, it could be reasonable for her to pick the human over the accurate, but misplaced, algorithm.

    We can construct a concrete illustration of a task mismatch by drawing on terminology in the Yeomans et al. article. The paper is titled "Making Sense of...
