ALGORITHMIC LEGAL METRICS.

Author: Dan L. Burk

INTRODUCTION
I. SURVEILLANCE AND PROFILING
   A. Big Data Processing
   B. Algorithmic Biases
II. ALGORITHMIC REFLEXIVITY
III. ALGORITHMIC MARKETS
   A. Algorithmic Performativity
   B. Calculated Consumers
IV. BIAS AND REFLEXIVITY
   A. Perfidious Transparency
   B. Quantified Performance
V. CALCULATED CULPABILITY
CONCLUSION

INTRODUCTION

Automated pattern analysis and decisionmaking, colloquially designated as "artificial intelligence" or "AI," is increasingly being deployed to mediate or to assist in social determinations across a range of domains including governance and regulatory decisions. (1) As potential applications for algorithmic legal decisionmaking grow, optimistic visions of such systems foresee the rise of accurate and efficient AI regulators, free from the errors of human decisionmakers. (2) More pessimistic visions foresee the imposition of impersonal and regimented machine discipline on an unsuspecting populace. (3)

Despite the confluence of such algorithmic hope and dread, both public and private legal functions are increasingly the subjects for algorithmic provision. (4) Predictive algorithms have been deployed to identify families at risk of abusive behavior, in order to mobilize social services intervention before actual harm occurs. (5) Predictive algorithms have been relied upon to assess the threat of criminal recidivism, and so determine the allowance for bail or for prisoner parole. (6) Predictive algorithms are being incorporated into policing strategies, allowing law enforcement resources to be positioned where criminal activity is anticipated to occur. (7) And algorithmic predictions are becoming progressively arrayed across a broad swath of other legal and social decisionmaking: to allocate public assistance, (8) to preempt customs and border violations, (9) to determine immigration status, (10) to forecast threats to national security. (11)

Emerging proposals suggest an even greater role for algorithmically determined legal metrics. The confluence of massively multisourced consumer surveillance and machine learning technologies has led to proposals for algorithmically mediated "personalized law" in a variety of public and private law areas. (12) Specifically, recent scholarship has suggested that the collection of detailed information on consumers, together with algorithmic processing of such data, will allow for customized tailoring of legal imperatives to the capacity or characteristics of individual actors. (13) This body of work argues that legal directives could be matched to detailed consumer profiles so as to create metrics that are "personalized" for the profiled individual, rather than uniform for the general populace. (14) Rather than evaluating the standard of care for a hypothetical reasonably prudent person, tort law could perhaps algorithmically determine the standard of care for a given accused tortfeasor. (15) Rather than allocate inheritance according to default intestacy rules, estate law could perhaps devise assets according to the algorithmically predicted preferences of a given decedent. (16) Proposals of this sort have been circulated for a variety of other legal regimes, including contract, criminal law, and copyright. (17)

Relying as they do on mechanisms of consumer surveillance, these proposals are effectively intended to translate the growing provision of mass "personalization" or "modulation" of market services and institutions to the provision of legal services and institutions. (18) Although such proposals for personalized legal metrics carry a degree of superficial plausibility, on closer inspection it becomes clear that they combine the worst defects of idealized economic analysis and simplistic algorithmic utopianism. (19) Such proposals display a breathtaking degree of naivete regarding the workings of algorithmic classification, not merely regarding the limitations of the technical infrastructure on which such classifications would rest, (20) but regarding the characteristics of the social infrastructure on which such classifications depend. An increasingly robust sociological literature demonstrates that algorithmic scoring effectuates "classification situations" that recreate and reinforce existing social orders, accelerating some of the most problematic mechanisms for exploitation and inequality. (21) Such metrics not only amplify and reinforce existing social biases, but tend to produce detrimental self-surveillance. Due to such effects, the quantified assessments supplied by algorithmic scoring are not neutral, but take on normative and moral connotations.

As yet, the legal policy discussions on algorithmic decisionmaking have taken little note of this work. But given the growing literature demonstrating the perverse social effects of algorithmic scoring systems, it seems clear that the incorporation of such metrics into the determination of legal status offers a new and troubling challenge to the rule of law. Legal determinations such as tort liability or criminal culpability that carry their own moral weight are likely to produce unintended consequences when associated with morally charged algorithmic metrics. A close examination of these mechanisms quickly illuminates disjunctions at the intersection among jurisprudence, automated technologies, and socially reflexive practices, and alerts us to areas of concern as legal institutions are increasingly amalgamated into the growing algorithmic assemblage.

The existing legal literature has only begun to touch the most evident issues regarding algorithmic governance. Following the categorical framework laid out by Lucas Introna, we might divide the issues of governance, and the legal literature addressing such issues, into three groupings. (22) The first of these categories concerns governance by algorithms, that is, the effects of deploying automated systems to administer legal and regulatory oversight. (23) Related to this set of questions, we can discern another emerging literature addressing a second set of issues around governance of algorithms, that is, the problems and issues related to oversight of algorithms that are deployed in both public and private sectors. (24) Under the first set of questions, we want to investigate whether automated systems used in governance are likely to promote efficiency, justice, equity, and democratic values. Under the second set of questions, we want to consider how to ensure that the operation of automated systems is fair, accurate, unbiased, and legally compliant. The inquiries are clearly related, as for example, in the concern that inaccurate or biased algorithms are unlikely to produce fair or just social outcomes.

Each of these sets of inquiries constitutes a legitimate and important line of investigation, but neither is my primary concern in this Article. Instead, I focus here on a third set of issues that has gone virtually unaddressed in the legal literature. Borrowing a term from Foucault, we may term these to be questions relating to the governmentality of algorithms, that is, to the mechanisms by which algorithms may fundamentally alter the personal behaviors and social structures with which they interact. (25) Under the first two sets of inquiries, previous commentators have begun to consider whether algorithmic governance comports with the rules and expectations we have for a civil society. But here I hope to address the antecedent question as to when the deployment of algorithmic systems may fundamentally change the rules and expectations by which we live. The question is not whether algorithms can or do fall within the rules; the question is how and whether they make the rules.

Consequently, in this Article, I begin to map out the intersection between the social effects of quantification and the social construction of algorithms in the context of legal decisionmaking. In previous work, I have explored the implications of attempting to incorporate legal standards into algorithms, arguing that the social action typical of algorithmic systems promises to shape and eventually become the legal standard it seeks to implement. (26) Here I essentially consider the inverse proposition: I explore the effects of incorporating algorithms, which is to say algorithmic metrics, into legal standards. In particular, I will examine the anticipated use of algorithmically processed "Big Data" in attempting to align legal incentives with social expectations.

I begin my examination by sketching the features of the sprawling and uncoordinated data gathering apparatus, or "surveillant assemblage," from which the profiles for algorithmic processing are extracted. I particularly highlight the distortions introduced into data profiles by the processing, by the social context, and by the inevitable interpretation associated with algorithmic metrics. A number of previous commentators have been properly concerned about the biases endemic to data profiling, but I argue that algorithmic bias goes well beyond the previous discussions of prejudice or inaccuracy to shape and define the social relationships and behavior surrounding the subjects of algorithmic data profiling.

Beginning in Part II, I locate the source of such distortions in reflexive social practices that are embodied in algorithmic measurements, and with which algorithmic processes interact in a broader structural context. Although there are numerous case studies documenting such effects, I employ familiar illustrations drawn from the intensively studied examples of commensurate credit scoring and law school ranking. The reflexive effects found in such algorithmic processes are well known in the existing social science literature, but are now accelerated and amplified by the speed and scale of automated data analysis and processing. In Part III, I argue that due to such processes, algorithmic metrics are performative, in the sense that they...
