Algorithms and Economic Justice: A Taxonomy of Harms and a Path Forward for the Federal Trade Commission

Author: Rebecca Kelly Slaughter

Published as an Information Society Project Digital Future Whitepaper & Yale Journal of Law & Technology Special Publication

Contents

I. Introduction
II. Algorithmic Harms
III. Using the FTC's Current Authorities to Better Protect Consumers
IV. New Legislative and Regulatory Solutions
V. Conclusion
Acknowledgements

The proliferation of artificial intelligence and algorithmic decision-making has helped shape myriad aspects of our society: from facial recognition to deepfake technology to criminal justice and health care, their applications are seemingly endless. Across these contexts, the story of applied algorithmic decision-making is one of both promise and peril. Given the novelty, scale, and opacity involved in many applications of these technologies, the stakes are often incredibly high.

As an FTC Commissioner, I aim to promote economic and social justice through consumer protection and competition law and policy. In recent years, algorithmic decision-making has produced biased, discriminatory, and otherwise problematic outcomes in some of the most important areas of the American economy. This article describes harms caused by algorithmic decision-making in the high-stakes spheres of employment, credit, health care, and housing, which profoundly shape the lives of individuals. These harms are often felt most acutely by historically disadvantaged populations, especially Black Americans and other communities of color. And while many of the harms I describe are not entirely novel, AI and algorithms are especially dangerous because they can simultaneously obscure problems and amplify them--all while giving the false impression that these problems do not or could not possibly exist.

This article offers three primary contributions to the existing literature. First, it provides a baseline taxonomy of algorithmic harms that portend injustice, describing both the harms themselves and the technical mechanisms that drive those harms. Second, it describes my view of how the FTC's existing tools--including section 5 of the FTC Act, the Equal Credit Opportunity Act, the Fair Credit Reporting Act, the Children's Online Privacy Protection Act, and market studies under section 6(b) of the FTC Act--can and should be aggressively applied to thwart injustice. And finally, it explores how new legislation or an FTC rulemaking under section 18 of the FTC Act could help structurally address the harms generated by algorithmic decision-making.

  I. Introduction

    The proliferation of artificial intelligence and algorithmic decision-making (1) in recent years has shaped myriad aspects of our society. The applications of these technologies are innumerable, from facial recognition to deepfake technology, criminal justice, and health care. Across these contexts, the story of algorithmic decision-making is one of both promise and peril. Given the novelty, scale, and opacity involved, the stakes are high for consumers, innovators, and regulators.

    Algorithmic decision-making, and the AI that fuels it, could realize its promise of promoting economic justice by distributing opportunities more broadly, resources more efficiently, and benefits more effectively. Pairing dramatically deeper pools of data with rapidly advancing machine-learning technology might yield substantial benefits for consumers, including by potentially mitigating the pervasive biases that infect human decision-making. (2) When used appropriately and judiciously, algorithms have also transformed access to educational opportunities (3) and improved health outcomes through better diagnostic rates and care adjustments. (4)

    But the potentially transformative power of algorithmic decision-making also risks serious harm if misused. In the criminal justice system, for example, commentators note that algorithms and AI contribute to over-surveillance, (5) wrongful detainment and arrest, (6) and biased risk assessments used to determine pre-trial status and even sentencing. (7) Mounting evidence reveals that algorithmic decisions can produce biased, discriminatory, and unfair outcomes in a variety of high-stakes economic spheres including employment, credit, health care, and housing. (8)

    The COVID-19 pandemic and its attendant social and economic fallout underscore the incredible stakes of the decisions we now delegate to technology. Even as unemployment has soared, firms increasingly use algorithms to help make employment decisions, (9) notwithstanding the questions that swirl about their compliance with nondiscrimination law. (10) Likewise, opaque algorithms used to select who receives COVID-19 vaccinations have resulted in wide distributional disparities (11) and perverse outcomes. (12)

    As a Commissioner at the Federal Trade Commission--an agency whose mission is to protect consumers from unfair or deceptive practices and to promote competition in the marketplace--I have a front-row seat to the use and abuse of AI and algorithms. In this role, I see firsthand that the problems posed by algorithms are both nuanced and context-specific. Because many of the flaws of algorithmic decision-making have long-standing analogs, related to both human decision-making and other technical processes, the FTC has a body of enforcement experience from which we can and should draw.

    This article utilizes this institutional expertise to outline the harms of applied algorithms and AI as well as the tools the FTC has at its disposal to address them, offering three primary contributions to the existing literature. First, it provides a baseline taxonomy of some of the algorithmic harms that threaten to undermine economic and civil justice. (13) I identify three ways in which flaws in algorithm design can produce harmful results: faulty inputs, faulty conclusions, and failure to adequately test. But not all harmful consequences of algorithms stem from design flaws. Accordingly, I also identify three ways in which sophisticated algorithms can generate systemic harm: by facilitating proxy discrimination, by enabling surveillance capitalism, (14) and by inhibiting competition in markets. In doing so, I show that at several stages during the design, development, and implementation of algorithms, failure to closely scrutinize their impacts can drive discriminatory outcomes or other harms to consumers.

    Second, this article describes my view of how the FTC's existing toolkit--including section 5 of the FTC Act, the Equal Credit Opportunity Act (ECOA), and the Fair Credit Reporting Act (FCRA)--can and should be aggressively applied to defend against these threats. For example, I argue that we should encourage non-mortgage creditors to collect demographic data in compliance with ECOA's self-testing safe harbor to assess existing algorithms for indicia of bias. I also discuss algorithmic disgorgement, an innovative and promising remedy the FTC secured in recent enforcement actions. Finally, in this section I identify some of the limitations on the reach of our existing enforcement tools.

    Those limitations tie directly to this article's third contribution: I explore how FTC rulemaking under section 18 of the FTC Act or new legislation could help more effectively address the harms generated by AI and algorithmic decision-making. I hope to draw the attention and ingenuity of the interested public to the challenges posed by algorithms so that we can work together on creating an enforcement regime that advances economic justice and equity.

    Ultimately, I argue that new technology is neither a panacea for the world's ills nor the plague that causes them. In the words of MIT-affiliated technologist R. David Edelman, "AI is not magic; it is math and code." (15) As we consider the threats that algorithms pose to justice, we must remember that just as the technology is not magic, neither is any cure to its shortcomings. It will take focused collaboration between policymakers, regulators, technologists, and attorneys to proactively address this technology's harms while harnessing its promise.

    This article proceeds in three sections. Section II outlines the taxonomy of harms caused by algorithmic decision-making. Section III outlines the FTC's existing toolkit for addressing those harms, the ways we can act more comprehensively to improve the efficacy of those tools, and the limitations on our authority. Finally, Section IV discusses new legislation and regulation aimed at addressing algorithmic decision-making more holistically.

  II. Algorithmic Harms

    A taxonomy of algorithmic harms, describing both the harms themselves and the technical mechanisms that drive them, is a useful starting point. This section is divided into two subparts. The first addresses three flaws in algorithm design that frequently contribute to discriminatory or otherwise problematic outcomes in algorithmic decision-making: faulty inputs, faulty conclusions, and failure to adequately test. The second subpart describes three ways in which even sophisticated algorithms still systemically undermine civil and economic justice. First, algorithms can facilitate discrimination by enabling the use of facially neutral proxies to target people based on protected characteristics. Second, the widespread application of algorithms both fuels and is fueled by surveillance capitalism. Third, sophisticated and opaque use of algorithms can inhibit competition and harm consumers by facilitating anticompetitive conduct and enhancing market power.

    These six different types of algorithmic harms often work in concert--with the first set often directly enabling the second--but before considering their interplay, it is helpful to describe them individually. Of course, the harms enumerated herein are not, and are not intended to be, an exhaustive list of the challenges posed by algorithmic decision-making. This taxonomy, however, does help identify some of the most common and pervasive problems that invite enforcement and regulatory intervention, and therefore is a helpful framework for consideration of potential enforcement approaches.

    A. Algorithmic Design...
