Algorithmic Elections.

Author: Bender, Sarah M.L.

Artificial intelligence (AI) has entered election administration. Across the country, election officials are beginning to use AI systems to purge voter records, verify mail-in ballots, and draw district lines. Already, these technologies are having a profound effect on voting rights and democratic processes. However, they have received relatively little attention from AI experts, advocates, and policymakers. Scholars have sounded the alarm on a variety of "algorithmic harms" resulting from AI's use in the criminal justice system, employment, healthcare, and other civil rights domains. Many of these same algorithmic harms manifest in elections and voting but have been underexplored and remain unaddressed.

This Note offers three contributions. First, it documents the various forms of "algorithmic decisionmaking" that are currently present in U.S. elections. This is the most comprehensive survey of AI's use in elections and voting to date. Second, it explains how algorithmic harms resulting from these technologies are disenfranchising eligible voters and disrupting democratic processes. Finally, it identifies several unique characteristics of the U.S. election administration system that are likely to complicate reform efforts and must be addressed to safeguard voting rights.

TABLE OF CONTENTS

INTRODUCTION
I. ALGORITHMIC DECISIONMAKING AND ALGORITHMIC HARMS
   A. Key Technical Terms and Concepts
   B. How Algorithms Harm
      1. Faulty Programming and Design
      2. Faulty Uses
      3. Proxy Discrimination
      4. Lack of Transparency
II. AUTOMATING ELECTION ADMINISTRATION
   A. Maintaining Voter Rolls
   B. Signature Matching
   C. Redistricting
   D. Other Potentially Impactful AI Developments
      1. Political Advertising
      2. Disinformation Campaigns
      3. Election Hacking
III. OVERCOMING BARRIERS TO PROGRESS AND REFORM
   A. Politics and the "Good Faith" Assumption
   B. Decentralization and Disuniformity
   C. Finding a Path Forward
CONCLUSION

INTRODUCTION

In recent years, the potential for algorithms to make voting easier and elections fairer and more reliable has gained increased attention. Computer scientists have developed algorithms to make redistricting less partisan, which have been touted as a cure for gerrymandering. (1) Counties are using artificial intelligence technologies (AIs) to conduct mobile-only elections, allowing voters to cast their ballots using a smartphone or other electronic device. (2) Others are piloting algorithmic tools that track voter data to ensure that no fraud or significant administrative errors occur. (3)

AI holds great promise. It can be used to automate a wide variety of processes and decisions that were previously performed by humans and are thus susceptible to error and inefficiencies. And unlike humans, algorithms cannot themselves engage in intentional discrimination. (4) As a result, they have the potential to improve traditional human decisionmaking and to render more objective and less discriminatory results. (5)

Unfortunately, this hope has not borne out in practice. Algorithms have instead proven to be "our opinions embedded in code." (6) Indeed, "[m]ounting evidence reveals that algorithmic decisions can produce biased, discriminatory, and unfair outcomes in a variety of high-stakes economic spheres including employment, credit, health care, and housing." (7) Prejudice can infect AIs and algorithms in a variety of ways, causing them to compound existing injustices and yield discriminatory results. For example, AI-generated recidivism scores used in Florida were almost twice as likely to falsely label Black defendants as future criminals, as compared to white defendants. (8)

Extensive scholarship has documented how AI is being used in the criminal justice, housing, education, employment, financial services, and healthcare domains, as well as the risks it poses to civil rights and civil liberties. (9) Relatively little attention has been given to its use in U.S. elections or its impact on voting rights, (10) however. This Note seeks to further that conversation.

This Note has two primary target audiences. The first is AI experts, legal scholars, policymakers, and advocates who are working to promote algorithmic accountability in other domains. I hope to persuade this group of the importance of addressing algorithmic harms in elections and voting--and to provide them with an initial framework for doing so effectively. The second target audience comprises public officials and voting rights advocates and experts who are working to improve our election systems but may be less familiar with AI. My goal is to provide this group with a workable understanding of how AI may affect their work, as well as why such technology must be deployed cautiously.

Part I seeks to facilitate conversation between these two audiences by providing a brief primer on the technical concepts discussed in this Note and by relating the different types of "algorithmic harms" that scholars have identified in other domains that are relevant to elections and voting. Part II is the heart of the Note. It catalogs the different ways that election administrators use AI to make decisions and manage elections, as well as the algorithmic harms this may cause. This is the most comprehensive review of AI's use in elections and voting to date. (11) Finally, Part III identifies several unique characteristics of election administration in the United States and explains why these characteristics may complicate efforts to address algorithmic harms in this domain.

I. ALGORITHMIC DECISIONMAKING AND ALGORITHMIC HARMS

Not all members of this Note's target audiences are familiar with how AI and algorithms work, and some of the terms used in this Note have been defined in different ways. This Part seeks to establish a baseline understanding of how algorithmic decisionmaking (12) can produce inaccurate, biased, and unfair outcomes. Section I.A defines the key technical terms used throughout this Note, as well as the scope of the technologies discussed in Part II. Section I.B describes different types of "algorithmic harms" that are relevant to elections and summarizes existing literature on how such harms occur and manifest in other civil rights domains.

A. Key Technical Terms and Concepts

This Note uses a variety of terms to refer to the emerging technologies revolutionizing election administration and other domains. These include "algorithms," "artificial intelligence," and "machine learning." Some authors have used the image of a Russian nesting doll to illustrate the relationships between these terms--algorithms are the largest, outermost doll because, while all AI uses algorithms, not all algorithms constitute AI. (13) Similarly, all machine learning involves AI, but not all AI involves machine learning. (14)

      Broadly speaking, an algorithm is "a finite series of well-defined, computer-implementable instructions" (15) used to process input data and generate certain outputs. (16) Today, nearly all software programs use some type of algorithm to solve problems and execute tasks. (17) Algorithms can be quite simple, like generating a Fibonacci sequence. (18) They can also be quite complex, like those that provide autonomous vehicles with driving instructions, identify abnormal X-rays and CT scans, or assign students to public schools. (19)
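To make the simple end of this spectrum concrete, a Fibonacci generator can be written as a short, finite series of well-defined instructions. The sketch below is purely illustrative (the function name and output format are my own, not drawn from the sources cited above):

```python
def fibonacci(n):
    """Return the first n Fibonacci numbers as a list.

    Each step is a well-defined, computer-implementable instruction:
    start from 0 and 1, then repeatedly add the two most recent values.
    """
    sequence = []
    a, b = 0, 1
    for _ in range(n):
        sequence.append(a)
        a, b = b, a + b
    return sequence

print(fibonacci(8))  # → [0, 1, 1, 2, 3, 5, 8, 13]
```

The same definition--finite, well-defined, computer-implementable--covers the far more complex algorithms that drive vehicles or read medical scans; only the number and sophistication of the instructions differ.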

      Experts define AI in a variety of ways, but the term generally refers to machines that mimic human intelligence. (20) AI systems use algorithms to analyze text, data, images, and other inputs and make decisions about them in a way that is consistent with human decisionmaking. (21) AI's "ability to extract intelligence from unstructured data" is particularly impactful. (22) Vast amounts of data are generated daily, which, on their face, have little apparent meaning. (23) The goal of AI is to make sense of such data, identifying new patterns and determining how best to act upon them. (24)

      Machine learning is a form of AI, which relies on algorithms that can learn from data without rules-based programming. (25) These learning algorithms can "classify data, pictures, text, or objects without detailed instruction and ... learn in the process so that new pictures or objects can be accurately identified based on that learned information." (26) Machine-learning technologies thus depend less on human programming and more on algorithms that can learn from data as they progress, improving at tasks with experience. (27)

      Scientists "train" machine-learning algorithms to do particular tasks by feeding the algorithm data for which the "target variable," or outcome of interest, is known. (28) The algorithm derives from these data "complex statistical models linking the input data with which it has been provided to predictions about the target variable." (29) For example, to train an algorithm to identify malignant tumors, scientists will show it a large number of tumor X-rays or scans and indicate which are benign and which are cancerous. (30) The algorithm will begin to pick up on patterns in the tumor images, allowing it to distinguish between benign and malignant tumors in new images. (31) Thus, the data used to train machine-learning algorithms--and the process by which scientists label the data--have a significant impact on the outcomes they generate. (32)
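The training process described above can be sketched in miniature. The toy classifier below "learns" a statistical summary (a per-label average) from labeled examples and uses it to label new inputs; the feature values, labels, and function names are invented for illustration and deliberately much simpler than real medical-imaging models:

```python
def train(examples):
    """Derive a statistical model (per-label centroid) from labeled data.

    examples: list of (feature_vector, label) pairs, where the label is
    the known "target variable" (e.g., "benign" or "malignant").
    """
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    # The "model" is just the average feature vector per label.
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(model, features):
    """Label a new example by the closest learned centroid."""
    def sq_dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(model, key=lambda label: sq_dist(model[label]))

# Hypothetical labeled training data: (size_cm, irregularity_score)
training_data = [
    ([0.5, 0.1], "benign"), ([0.7, 0.2], "benign"),
    ([2.5, 0.8], "malignant"), ([3.0, 0.9], "malignant"),
]
model = train(training_data)
print(predict(model, [2.8, 0.7]))  # → malignant
```

The sketch also illustrates the point in the final sentence above: the model is nothing but a summary of its training data, so mislabeled or unrepresentative examples flow directly into the predictions it generates.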

B. How Algorithms Harm

Because AIs do not have any conscious awareness or intentions that are independent from those embedded within their code, "most commentators and courts believe that an AI cannot itself engage in intentional discrimination." (33) Nevertheless, algorithmic decisionmaking can lead to a number of harmful outcomes, which are well documented in other civil rights domains and are likewise present in election administration. Faulty training and poor design can cause algorithmic systems to render inaccurate and biased results. But even well-designed AIs may be misused or may "proxy discriminate." Finally, these technologies' opacity and complexity can exacerbate each of these harms.
