Accountable Algorithms

Joshua A. Kroll

Many important decisions historically made by people are now made by computers. Algorithms count votes, approve loan and credit card applications, target citizens or neighborhoods for police scrutiny, select taxpayers for IRS audit, grant or deny immigration visas, and more.

The accountability mechanisms and legal standards that govern such decision processes have not kept pace with technology. The tools currently available to policymakers, legislators, and courts were developed to oversee human decisionmakers and often fail when applied to computers instead. For example, how do you judge the intent of a piece of software? Because automated decision systems can return potentially incorrect, unjustified, or unfair results, additional approaches are needed to make such systems accountable and governable. This Article reveals a new technological toolkit to verify that automated decisions comply with key standards of legal fairness.

We challenge the dominant position in the legal literature that transparency will solve these problems. Disclosure of source code is often neither necessary (because of alternative techniques from computer science) nor sufficient (because of the difficulty of analyzing code) to demonstrate the fairness of a process. Furthermore, transparency may be undesirable, such as when it discloses private information or permits tax cheats or terrorists to game the systems that determine audits or security screening.

The central issue is how to assure the interests of citizens, and of society as a whole, in making these processes more accountable. This Article argues that technology is creating new opportunities, subtler and more flexible than total transparency, to design decisionmaking algorithms so that they better align with legal and policy objectives. Doing so will improve not only the current governance of automated decisions but also, in certain cases, the governance of decisionmaking in general. The implicit (or explicit) biases of human decisionmakers can be difficult to find and root out, but we can peer into the "brain" of an algorithm: computational processes and purpose specifications can be declared prior to use and verified afterward.

The technological tools introduced in this Article apply widely. They can be used in designing decisionmaking processes in both the private and public sectors, and they can be tailored to verify different characteristics as desired by decisionmakers, regulators, or the public. By forcing a more careful consideration of the effects of decision rules, they also prompt policy discussions and closer scrutiny of legal standards. As such, these tools have far-reaching implications throughout law and society.

Part I of this Article provides an accessible and concise introduction to foundational computer science techniques that can be used to verify and demonstrate compliance with key standards of legal fairness for automated decisions without revealing key attributes of the decisions or the processes by which they were reached. Part II then describes how these techniques can assure the key governance attribute of procedural regularity: that decisions are made under an announced set of rules consistently applied in each case. We demonstrate how this approach could be used to redesign and resolve issues with the State Department's diversity visa lottery. In Part III, we go further and explore how other computational techniques can assure that automated decisions preserve fidelity to substantive legal and policy choices. We show how these tools may be used to assure that certain kinds of unjust discrimination are avoided and that automated decision processes behave in ways that comport with the social or legal standards governing the decision. We also show how automated decisionmaking may complicate existing doctrines of disparate treatment and disparate impact, and we discuss recent computer science work on detecting and removing discrimination in algorithms, especially in the context of big data and machine learning. Finally, in Part IV, we propose an agenda for synergistic collaboration between computer science, law, and policy to advance the design of automated decision processes for accountability.

INTRODUCTION
I. HOW COMPUTER SCIENTISTS BUILD AND EVALUATE SOFTWARE
   A. Assessing Computer Systems
      1. Static Analysis: Review from Source Code Alone
      2. Dynamic Testing: Examining a Program's Actual Behavior
      3. The Fundamental Limit of Testing: Noncomputability
   B. The Importance of Randomness
II. DESIGNING COMPUTER SYSTEMS FOR PROCEDURAL REGULARITY
   A. Transparency and Its Limits
   B. Auditing and Its Limits
   C. Technical Tools for Procedural Regularity
      1. Software Verification
      2. Cryptographic Commitments
      3. Zero-Knowledge Proofs
      4. Fair Random Choices
   D. Applying Technical Tools Generally
   E. Applying Technical Tools to Reform the Diversity Visa Lottery
      1. Current DVL Procedure
      2. Transparency Is Not Enough
      3. Designing the DVL for Accountability
III. DESIGNING ALGORITHMS TO ASSURE FIDELITY TO SUBSTANTIVE POLICY CHOICES
   A. Machine Learning, Policy Choices, and Discriminatory Effects
   B. Technical Tools for Nondiscrimination
      1. Learning from Experience
      2. Fair Machine Learning
      3. Discrimination, Data Use, and Privacy
   C. Antidiscrimination Law and Algorithmic Decisionmaking
      1. Ricci v. DeStefano: The Tensions Between Equal Protection, Disparate Treatment, and Disparate Impact
      2. Ricci Impels Designing for Nondiscrimination
IV. FOSTERING COLLABORATION ACROSS COMPUTER SCIENCE, LAW, AND POLICY
   A. Recommendations for Computer Scientists: Design for After-the-Fact Oversight
   B. Recommendations for Lawmakers and Policymakers
      1. Reduced Benefits of Ambiguity
      2. Accountability to the Public
      3. Secrets and Accountability

INTRODUCTION

Many important decisions that were historically made by people are now made by computer systems (1): votes are counted; voter rolls are purged; loan and credit card applications are approved; (2) welfare and financial aid decisions are made; (3) taxpayers are chosen for audits; citizens or neighborhoods are targeted for police scrutiny; (4) air travelers are selected for search; (5) and visas are granted or denied. The efficiency and accuracy of automated decisionmaking ensure that its domain will continue to expand. Even mundane activities now involve complex computerized decisions: everything from cars to home appliances now regularly executes computer code as part of its normal operation.

However, the accountability mechanisms and legal standards that govern decision processes have not kept pace with technology. The tools currently available to policymakers, legislators, and courts were developed primarily to oversee human decisionmakers. Many observers have argued that our current frameworks are not well-adapted for situations in which a potentially incorrect, (6) unjustified, (7) or unfair (8) outcome emerges from a computer. Citizens, and society as a whole, have an interest in making these processes more accountable. If these new inventions are to be made governable, this gap must be bridged.

In this Article, we describe how authorities can demonstrate, and how the public at large and oversight bodies can verify, that automated decisions comply with key standards of legal fairness. We consider two approaches: ex ante approaches, which aim to establish that the decision process works as expected and which are commonly studied by technologists and computer scientists; and ex post approaches, such as review and oversight, which operate once decisions have been made and which are common in existing governance structures. Our proposals aim to use the tools of the first approach to guarantee that the second can function effectively. Specifically, we describe how technical tools for verifying the correctness of computer systems can be used to ensure that appropriate evidence exists for later oversight.
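To make this pattern concrete, consider a minimal sketch (in Python, with invented names; Part II develops the actual machinery) of the hash-based cryptographic commitments on which our proposals rely. Before any decisions are made, the decisionmaker publishes a digest that binds it to a particular policy without revealing the policy; after the fact, an overseer who is shown the policy and the random nonce can confirm that the policy matches what was committed.

    # An illustrative sketch of commit-then-verify; the policy shown is a
    # placeholder, not an actual decision rule.
    import hashlib
    import secrets

    def commit(data: bytes) -> tuple[bytes, bytes]:
        # Bind to `data` by hashing it together with a random nonce.
        nonce = secrets.token_bytes(32)
        return hashlib.sha256(nonce + data).digest(), nonce

    def verify(digest: bytes, nonce: bytes, data: bytes) -> bool:
        # Confirm, after the fact, that `data` is what was committed to.
        return hashlib.sha256(nonce + data).digest() == digest

    # Ex ante: the agency commits to its decision policy (e.g., its source
    # code) and publishes only the digest.
    POLICY_SOURCE = b"def decide(applicant): ..."  # hypothetical placeholder
    published_digest, secret_nonce = commit(POLICY_SOURCE)

    # Ex post: an overseer given the policy and nonce can check that this
    # exact policy was fixed before the decisions it is said to explain.
    assert verify(published_digest, secret_nonce, POLICY_SOURCE)

Because the digest is published before decisions issue, the decisionmaker cannot quietly substitute a different rule afterward; this is the evidentiary hook on which later review can hang.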

We begin with an accessible and concise introduction to the computer science concepts on which our argument relies, drawn from the fields of software verification, testing, and cryptography. Our argument builds on the fact that technologists can and do verify for themselves that software systems work in accordance with known designs. No computer system is built and deployed in the world shrouded in total mystery. (9) While we do not advocate any specific liability regime for the creators of computer systems, we outline the range of tools that computer scientists and other technologists already use, and show how those tools can ensure that a system meets specific policy goals. In particular, while some of these tools provide assurances only to the system's designer or operator, other established methods could be leveraged to convince a broader audience, including regulators or even the general public.

The tools available during the design and construction of a computer system are far more powerful and expressive than those that can be bolted onto a system after it has been built. We argue that, in many instances, designing a system for accountability from the outset can achieve accountability goals that imposing new transparency requirements on existing system designs cannot.

We show that computer systems can be designed to prove to oversight authorities and the public that decisions were made under an announced set of rules consistently applied in each case, a condition we call procedural regularity. The techniques we describe for ensuring procedural regularity can be extended to demonstrate adherence to certain kinds of substantive policy choices, such as blindness to a particular attribute (e.g., race in credit underwriting). Procedural regularity ensures that a decision was made using consistently applied standards and practices; it does not, however, guarantee that those practices are themselves good policy. Ensuring that a decision procedure is well justified or...
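As a simple illustration of what such a blindness property means operationally, the following sketch (again in Python; the scoring rule and field names are invented for exposition) tests that a toy underwriting rule's output never varies with the protected attribute:

    # An illustrative spot-check of blindness to a protected attribute;
    # the scoring rule and record fields are hypothetical.
    def credit_score(record: dict) -> int:
        # By construction, this toy rule never reads the 'race' field.
        return 2 * record["income"] - record["debt"]

    def is_blind_to(decide, record: dict, attribute: str, values) -> bool:
        # Vary only the protected attribute; the outcome should not change.
        outcomes = {decide(dict(record, **{attribute: v})) for v in values}
        return len(outcomes) == 1

    applicant = {"income": 50, "debt": 20, "race": "A"}
    assert is_blind_to(credit_score, applicant, "race", ["A", "B", "C"])

Testing of this kind can only spot-check the property on particular inputs; the point of the techniques discussed in Part III is to establish such properties as verifiable characteristics of the system itself.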

