SYSTEMATIZING DISCRIMINATION: AI VENDORS & TITLE VII ENFORCEMENT

Colin Clemente Jones

INTRODUCTION
I. ARTIFICIAL INTELLIGENCE AT WORK
   A. The Building Blocks of Machine Learning
   B. Human-Selected Data Points
      1. Target Criteria
      2. Training Data
      3. Candidate Inputs
   C. The Trend of AI Hiring
II. THE NORMATIVE PROBLEMS OF AI-DRIVEN DISCRIMINATION
   A. AI Can Make Discrimination More Common
   B. AI May Conceal Discrimination When It Occurs
   C. AI May Limit Antidiscrimination Remedies
   D. AI Can Systematize Harm
III. THE LEGAL SOLUTIONS & LIMITS
   A. Disparate Treatment Liability
   B. Disparate Impact Liability
      1. The Logic of Disparate Impact
      2. Prima Facie Case of Disparate Impact
      3. The Business Necessity Defense
         a. Target Criteria
         b. Screening Proxy
   C. The Limits of Affirmative Action
IV. ROUTES TO ROBUST ENFORCEMENT
   A. Enjoin the Use of Invalidated Vendor Tools
   B. Amend Title VII to Address Unique Problems of AI Vendors
CONCLUSION

INTRODUCTION

Amazon decided in 2014 to use its extensive technical expertise, vast troves of data, and functionally unlimited resources to automate the recruitment and evaluation of job applicants. (1) A team of a dozen engineers built hundreds of machine learning models, each focused on a different set of job functions. (2) In almost no time at all, the models were able to score candidate resumes based on more than 50,000 terms that showed up in the resumes of employees hired in the previous decade. (3) Things seemed great until Amazon's engineers realized, a year into the project, that the program had "taught itself that male candidates were preferable." (4) The models prioritized candidates with resumes that used words disproportionately found on men's resumes (e.g., "executed" and "captured") and deprioritized candidates with resumes that included the word "women's" (e.g., "women's chess club captain") or featured all-women's colleges. (5)

In the year after Reuters broke the story about Amazon's failed attempt at automated hiring, dozens of think pieces debated the merits ad nauseam, with one concluding that "if a company like Amazon can't pull [automated hiring] off without problems, it's difficult to imagine that less sophisticated companies can." (6) And yet, every single day, companies far less resourced than Amazon rely on artificial intelligence ("AI") to make hiring and firing decisions. (7) Most of them, including a third of Fortune 100 companies, are aided in this endeavor by an ever-expanding industry of AI hiring vendors. (8) Unfortunately, there is little to suggest that AI vendors have solved the problems that Amazon could not. (9) Instead, each vendor sells its AI tool, including whatever biases may be baked into its models, to dozens, hundreds, or thousands of customers. In doing so, each vendor replicates and systematizes discrimination in an unprecedented manner.

Title VII was designed to "strike at the entire spectrum" of discriminatory conduct in employment, (10) but it appears to be falling short in this context, as no plaintiff has successfully brought an employment discrimination suit based on the use of AI hiring software. Nonetheless, the technology has generated considerable attention from activists, (11) government agencies, (12) media, (13) and scholars. (14)

This Comment proceeds in four parts to analyze the normative problems, review the legal frameworks, and propose potential solutions to address AI-driven employment discrimination. Because there are as many definitions of AI as there are law review articles discussing AI, Part I defines AI in the employment context before describing how AI tools are currently being used in the workplace.

The remaining parts address three key gaps in the algorithmic discrimination scholarship to date. First, existing antidiscrimination scholarship on AI in hiring focuses, explicitly or implicitly, on two chief harms of AI tools: (1) making employment decisions based on job-irrelevant correlations in a way that results in discriminatory outcomes and (2) creating new barriers for legal redress. (15) These concerns are substantial and well-documented, but the scholarship overlooks a third significant harm of AI hiring: systematizing discrimination and thereby excluding people or communities from large swaths of the labor market. Although the systematizing effect of AI has not been discussed at length in the employment discrimination literature, scholars have identified it as a distinctive concern of AI-driven decisionmaking in other contexts. (16) In Part II, I seek to build on these scholars' work by considering the normative problem of systematicity in the context of employment discrimination, where the concern is particularly pressing.

Second, most of the current proposals for reform focus on improvements to the technology or fundamental changes to antidiscrimination law. (17) I suggest that this is because many scholars underestimate the potential effectiveness of existing disparate impact doctrines. (18) Thus, in Part III, I examine the caselaw and argue that AI-driven discrimination can be effectively challenged under a disparate impact theory.

Third, scholarship to date has generally set aside the question of third-party liability. (19) However, if one is concerned about systematicity, third-party liability becomes a central focus, because few actors have a bigger role in systematizing employment discrimination than the vendors who create and sell AI hiring tools to hundreds of employers. Thus, even if current disparate impact doctrines are adequate to hold employers accountable for AI-driven discrimination, the lack of liability for most third parties under Title VII and other nondiscrimination statutes presents a considerable problem. In Part IV, I propose strategies that the EEOC could pursue immediately, as well as long-term reforms Congress should enact, to address this problem.

Finally, a note on terminology. Resolving the thorny debates around the definitions of "bias" and "discrimination" is beyond my scope here. Nonetheless, for a working definition, I embrace Pauline Kim's conception of "classification bias," which she defines as occurring "when employers rely on classification schemes, such as data algorithms, to sort or score workers in ways that worsen inequality or disadvantage along the lines of race, sex, or other protected characteristics." (20) Similarly, I am fundamentally concerned with an antisubordination theory of discrimination, which flows intuitively from Kim's use of classification bias. (21) Accordingly, when I discuss biased data or technology throughout this Comment, I am not using that term in a technical sense to suggest that algorithms are statistically unsound; instead, I'm referring to a tendency to further the subordination of marginalized groups. (22)

  I. ARTIFICIAL INTELLIGENCE AT WORK

    The design and operation of artificial intelligence involve considerable complexity, which is the subject of a robust and technical scholarly literature, most of which is beyond the scope of this Comment. However, identifying basic definitions, building blocks, and human decision points is essential to understanding both how AI tools facilitate discrimination and how the law can address such harms. At the broadest level, Joshua Kroll's straightforward maxim is compelling: "AI is just automation." (23) The AI tools used in hiring are generally built with a combination of algorithms, machine learning, and training datasets selected by employers and vendors to automate employment processes previously performed by humans.

    A. The Building Blocks of Machine Learning

      First, algorithms are an essential element of all AI tools. (24) An algorithm is a formula or set of rules dictating "procedures for transforming input data into a desired output, based on specified calculations." (25) Algorithms aren't new and don't necessarily require a computer; they include "simple point-based scoring systems [that] are used... to automate credit decisions, rate recidivism risk, and make clinical medical decisions such as prioritizing vaccine administration in the COVID-19 pandemic response." (26) In practice, most of the AI tools discussed here use big data and machine learning and thus involve considerably more complexity than point-based scoring systems.
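
      To make the contrast concrete, a point-based scoring system of the kind quoted above can be expressed in a few lines of code. The following Python sketch is purely hypothetical; the criteria and point values are invented for illustration and do not reflect any actual credit, clinical, or hiring system.

```python
# A minimal, hypothetical point-based scoring algorithm: a fixed set
# of human-authored rules that transforms input data into an output
# score, with no machine learning involved. The criteria and point
# values are invented for illustration.

def score_applicant(applicant: dict) -> int:
    """Apply fixed rules to produce a score for one applicant."""
    points = 0
    if applicant.get("years_experience", 0) >= 3:
        points += 2
    if applicant.get("has_certification", False):
        points += 1
    if applicant.get("referred_by_employee", False):
        points += 1
    return points

# The same rules apply identically to every applicant.
print(score_applicant({"years_experience": 5, "has_certification": True}))  # 3
```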

      Big data and machine learning have fundamentally transformed algorithms from simple formulas into complex and opaque decisionmaking systems. (27) Big data can refer to a large corpus of data or the process of data mining to gather such a corpus. Through machine learning, a software program uses that data to develop a predictive algorithm, which can then be applied to new data to predict a given outcome, like employment success. (28)

      In the end, AI tools developed through machine learning involve two algorithms. To create the first algorithm, developers give the machine learning program instructions about what kind of predictive algorithm to develop. These instructions include the outcome to be predicted (the "target criteria" or "target variable"), the input variables for the program to draw on, and the scope of training or baseline data. (29) Armed with these instructions, the machine learning program explores the training data and creates the second algorithm--the screening algorithm. (30) This screening algorithm consists of a set of rules that the machine has inferred from the patterns it observed in the training data: "they are, quite literally, rules learned by example." (31) And those rules attempt to predict target criteria based on (superficially unrelated) candidate attributes. (32)
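
      A brief sketch may make this two-algorithm structure concrete. In the hypothetical Python example below (which assumes the scikit-learn library), the learning procedure is the first algorithm, and the fitted screening model it produces is the second. The target variable, the two resume-derived features, and the training data are all invented for illustration.

```python
# A hypothetical sketch of the two-algorithm structure described above.
# The target variable, features, and training data are all invented.
from sklearn.linear_model import LogisticRegression

# Developer instructions: predict the target criterion "was hired" (1/0)
# from two resume-derived candidate inputs.
X_train = [
    [5, 1],   # [years of experience, resume contains "executed" (1/0)]
    [2, 0],
    [7, 1],
    [1, 0],
]
y_train = [1, 0, 1, 0]  # past hiring outcomes: the target criterion

# First algorithm: the learning procedure, armed with its instructions.
learner = LogisticRegression()

# Second algorithm: the screening model, whose rules are inferred from
# patterns in the training data -- "rules learned by example."
screening_model = learner.fit(X_train, y_train)

# The screening algorithm predicts the target for a new candidate
# based on (superficially unrelated) candidate attributes.
new_candidate = [[4, 1]]
print(screening_model.predict(new_candidate))        # predicted outcome
print(screening_model.predict_proba(new_candidate))  # predicted probability
```

      Because the screening rules are inferred entirely from the training data, any bias embedded in past hiring decisions (like the gendered patterns in Amazon's resume data described above) will be reproduced in the model's predictions.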

    B. Human-Selected Data Points

      Relying on a "veneer of high-tech objectivity," (33) AI tools often "enjoy an undeserved assumption of fairness or objectivity." (34) But such deference overlooks the key decisions humans make at several points in the development and implementation of AI tools. Solon Barocas and Andrew Selbst highlight three key human-selected data points: the target criteria, the training data, and the candidate inputs.
