AN INSTITUTIONAL VIEW OF ALGORITHMIC IMPACT ASSESSMENTS

Andrew D. Selbst

TABLE OF CONTENTS

I. INTRODUCTION
II. ALGORITHMIC HARMS AND LIABILITY REGIMES
     A. The Discriminatory Hiring Algorithm
     B. The Unexplained Loan Denial
     C. The Unsafe Medical AI
III. ALGORITHMIC IMPACT ASSESSMENTS
     A. The AIA Models
     B. The Important Aspects of an AIA
          1. Early Intervention
          2. Open-Ended Questions
          3. Accountability
IV. THROUGH AN INSTITUTIONAL LENS
     A. Collaborative Governance
     B. Legal Managerialism
     C. Beyond Compliance Behaviors
V. LEARNING FROM THE FIELD
     A. Starting with the Technology
     B. Looking to Qualitative Empirical Research
     C. Documentation and Testing Standards
     D. Ethical Frameworks and Social Impact Assessment
VI. CONCLUSION

I. INTRODUCTION

In broad strokes, the arguments about the perils and promise of artificial intelligence ("AI") are well-rehearsed. AI can crunch quantities of data that no human can. It promises to find patterns that humans would otherwise miss; to be ever-vigilant where humans have to divide their time; and to be precise, mechanistic, and efficient where humans are arbitrary, sloppy, and biased. (1) AI also brings risks of harmful outcomes due to replication of human bias or other programmed-in biases; (2) errors that result from AI's ignorance of social and cultural contexts; (3) displacement of labor (4) and reduction of the tax base; (5) and difficulties of oversight stemming from a lack of transparency, (6) predictability, (7) and explainability, (8) as well as the transfer of decisionmaking authority from the democratic process to programmers. (9)

Given the power and great potential for harm that AI presents, legal scholars, policymakers, and advocates are looking to possible regulatory responses, including pre-existing remedies in antidiscrimination law, (10) administrative law or due process, (11) and tort law. (12) Scholars in other fields have been working too: looking to build fairer, more interpretable, (13) and reviewable (14) systems; arguing for a better understanding of how algorithmic systems are situated in social contexts; (15) and advocating for public participation in algorithmic governance. (16)

But while AI's problems are recognized generally, many of the specifics are still not understood. We are still not able to predict in detail whether a particular AI is likely to be more or less biased than humans; what makes it so; and how the answers may vary across different contexts, such as policing, employment, credit, or public benefits. The public does not have insight into how specific decisions that firms make when designing or implementing AI systems affect their downstream results, or how--or, indeed, if--firms are measuring or addressing those impacts. We do not know how policy goals are translated into algorithmic systems, or the political choices that the algorithmic systems actually represent. (17) Because almost all AI systems--even those used in the public sector (18)--are developed privately and secretly, (19) the public knows very little about them. (20)

For this reason, one regulatory approach that has gained favor in recent years is regulation requiring Algorithmic Impact Assessments ("AIAs"). (21) The impact assessment approach has two principal goals. The first is to get the people who build systems to think methodically about the details and potential impacts of a complex project before its implementation, thereby heading off risks before they become too costly to correct. (22) As proponents of values-in-design have argued for decades, the earlier in project development that social values are considered, the more likely that the end result will reflect those social values. (23) The second goal is to create and provide documentation of the decisions made during development and their rationales, which in turn can lead to better accountability for those decisions and useful information for future policy interventions.

Since the passage of the National Environmental Policy Act ("NEPA") in 1969, (24) impact assessments have been a commonly replicated tool, used in a wide variety of contexts: environmental, (25) sentencing, (26) privacy, (27) human rights, (28) data protection, (29) police technology, (30) surveillance, (31) and--in Canada, where the AIA is already a reality--algorithmic decisionmaking. (32) They are used extensively at all levels of government. (33) And although NEPA's impact assessments were originally intended for the public sector, because the law was held to apply to any project that requires federal funding or permitting, the private sector has been conducting them for just as long as governments have. (34) In the decades since NEPA's enactment, a field of "social impact assessment" has arisen with the aim of developing impact assessment methodologies within the private sector. (35)

Impact assessments are most useful when projects have unknown and hard-to-measure impacts on society, when the people creating the project are the ones with the knowledge and expertise to estimate its impacts but have inadequate incentives to generate the needed information, and when the public has no other means to discern that information. (36) The AIA is attractive because we are now in exactly such a situation with respect to algorithmic harms. (37) The public knows that there are potential harms associated with algorithmic systems but has neither the information nor the expertise to get into the weeds and discover what types of decisions in system design lead to particular types of problems. It will be difficult to address algorithmic harms more concretely or thoroughly without such information.

While AIAs may be a sound regulatory strategy in principle, a practical challenge arises when we consider that they will necessarily be implemented by the very firms building algorithmic technology. (38) The expertise and information contained within the industry itself are necessary for successful assessment of harms, and therefore the industry will have a hand in its own governance. This fact has certain consequences for the efficacy of the regulation in practice. Those consequences, and how to mitigate or address them, are the subject of this Article. It is necessary to understand the institutional forces at play in the organizations where systems will be built and impacts will be assessed. Only by understanding how the law is likely to be shaped and understood on the ground can we hope to use it to its fullest effect. This Article will argue in part that, once filtered through the institutional logics of the private sector, the AIA's first goal--to improve systems through better design--will only be effective in those organizations motivated by social obligation rather than mere compliance. However, the second goal--to produce the information needed for better policy and public understanding--is what really can make an AIA regime worthwhile, regardless of organizations' motivations.

That the current environment lends itself to an AIA approach does not mean that in a vacuum AIAs would be the most effective regulation of algorithmic systems possible. Quite the contrary. As this Article will detail, AIA regimes will likely not be effective enough to be the final word on policy. But, given the information disparities between developers on the one hand, and policymakers and the public on the other, regulation that can slow down the development process, create pathways for public input, and push information out to the public can be an important step toward both mitigating current harms and developing better, more concrete regulation in the future. There are certainly reasons to think we should skip this step entirely and immediately move toward more aggressive regulation. Such an approach may be called for in certain contexts, such as facial recognition, in which algorithmic systems pose unique dangers. (39) Additionally, as a matter of politics, reformers may get only one bite at the apple, which suggests that AIAs or any other stopgap regulation would in fact be a mistake. (40) These are important points, but it is also true that any regulation enacted without the information that an AIA regime would produce would be operating partly in the dark and therefore would result in certain unintended consequences and likely greater resistance from industry. Such a move might be the right one in the end, either because of the politics or for reasons that this Article discusses, including that the private sector can seriously undermine regimes of collaborative governance. (41) The aim of this Article, however, is to take seriously the practical reality that the private sector will be involved in any AIA regulation. If we are to decide whether AIAs are a good idea at all, or in case legislators move forward with the AIA idea as an achievable second-best approach, it will be important to understand what the best version of an AIA looks like.

The Article proceeds in four Parts. Part II introduces the AIA and explains why it is likely a useful approach. It offers three representative examples of algorithmic harm that have surfaced in scholarly literature and popular discourse: the biased hiring algorithm, the unexplained credit denial, and the unsafe medical AI. Each of these is a real case that implicates a recognized algorithmic harm: discrimination, arbitrary decisionmaking, and physical injury, respectively. Part II demonstrates how current mechanisms of accountability for the relevant harm are difficult to apply, specifically because the public knows so little about the development processes. This is why regulation must focus on knowledge development before more substantive regulation can issue.

Part III briefly surveys different models of AIAs that have been proposed, as well as two alternatives: self-regulation and audits. These oversight mechanisms share many aspects but differ in important ways. Attending to these...
