AN AUDITING IMPERATIVE FOR AUTOMATED HIRING SYSTEMS

Author: Ifeoma Ajunwa

TABLE OF CONTENTS

I. INTRODUCTION
II. AUTOMATED HIRING AS BUSINESS PRACTICE
   A. The Business Case
   B. How Automated Is Automated Hiring?
   C. Potential for Misuse
III. EX MACHINA: TECHNO-SOLUTIONIST APPROACHES
   A. Humans Conform to the Machine
   B. Algorithms to the Rescue
   C. The Perils of Techno-Solutionism
IV. DO EMPLOYMENT LAWS ADEQUATELY ADDRESS AUTOMATED HIRING?
   A. The Uphill Climb for Disparate Impact Claims
   B. Intellectual Property Law and the CFAA
   C. Recognizing an Affirmative Duty
V. A HYBRID APPROACH
   A. Internal Auditing as Corporate Social Responsibility
      1. Tear-off Sheets: What Information Is Needed for Verification?
      2. Enhancing Applicant Selection: What Standards Should Apply?
      3. The Benefits of Internal Audits
   B. External Auditing: The Fair Automated Hiring Mark
      1. The Pros and Cons of a Governmental Certifying System
      2. The Pros and Cons of a Third-Party Non-Governmental Certifying System
   C. Collective Bargaining
      1. Data Digested and Determining Probative Evaluation Criteria
      2. Data End Uses and Fairness by Design
      3. Data Control and Portability
      4. Preventing "Algorithmic Blackballing"
   D. The Employer's Burden
VI. CONCLUSION
APPENDIX

I. INTRODUCTION

Imagine this scenario: A woman seeking a retail job is informed that she can apply for the job only online. The position is for a salesclerk at a retail company with store hours from 9:00 AM to 9:00 PM. She is interested in the morning and afternoon hours, as she has children who are in school until 3:00 PM. When completing the application, she reaches a screen where she is prompted to register her hours of availability. She enters 9:00 AM to 3:00 PM, Monday through Friday. However, when she hits the button to advance to the next screen, she receives an error message indicating that she has not completed the current section. She refreshes her screen, she restarts her computer, and still the same error message remains. Finally, in frustration, she abandons the application. Compare the above to this second scenario: A fifty-three-year-old man is applying for a job that requires a college degree. But when he attempts to complete the application online, he finds that the drop-down menu offers only college graduation dates that go back to the year 2000. The automated hiring platform will, in effect, exclude many applicants who are older than forty years old. If the man also chooses to forgo the application like the woman in the previous scenario, the automated hiring system may not retain any record of the two failed attempts to complete the job application. (1)

The vignettes above reflect the real-life experiences of job applicants who must now contend with automated hiring systems in their bid for employment. (2) These stories illustrate the potential for automated hiring systems to discreetly and disproportionately cull the applications of job seekers who are from legally protected classes. (3) This is especially concerning given that legal scholars have identified a "bias in, bias out" problem for automated decision-making. (4) Automated hiring as a socio-technical trend challenges the American bedrock ideal of equal opportunity in employment, (5) as such automated practices may not only be deployed to exclude certain categories of workers but may also be used to justify the inclusion of other classes as more "fit" for the job. (6) This raises the legal concern that algorithms may be used to manipulate the labor market in ways that negate equal employment opportunity. (7) That concern is further exacerbated given that nearly all Fortune 500 companies now use algorithmic recruitment and hiring tools. (8) Algorithmic hiring has also saturated the low-wage retail market, with the top twenty Fortune 500 companies, which are mostly retail and commerce companies that boast large numbers of employees, almost exclusively hiring through online platforms. (9)

Although it is undeniable that there could be tangible economic benefits to adopting automated decision-making, (10) the received wisdom of the objectivity of automated decision-making, coupled with an unquestioning acceptance of the results of algorithmic decision-making, (11) has allowed automated hiring systems to proliferate without adequate legal oversight. As Professor Margot Kaminski notes, addressing algorithmic decision-making concerns requires both individual and systemic approaches. (12) Currently, the algorithmic decisions made in the private sector are largely unregulated, and Kaminski argues for a collaborative approach to governance that could satisfy both individual and collective concerns:

Collaborative governance is a middle ground, a third way, that aims to harness the benefits of self-regulation without its pitfalls. The government stays significantly involved as a backdrop threat to nudge private sector involvement, as a forum for convening and empowering conflicting voices, as an arbiter or certifier in the name of the public interest, and as a hammer that can come down to enforce compliance. (13)

Thus, the goal of this Article is neither to argue for nor against the use of automated decision-making in employment, nor is it to examine whether automated hiring systems are better than humans at making hiring decisions. For antidiscrimination law, the efficacy of any particular hiring system is a secondary concern to ensuring that any such system does not unlawfully discriminate against protected categories. (14) Therefore, my aim is to suggest collaborative regulatory regimes for automated hiring systems that will ensure that any benefits of automated hiring are not negated by (un)intended outcomes, such as unlawful discrimination on the basis of protected characteristics.

Furthermore, this Article owes a debt to Professor Katherine Strandburg, who notes that explainability has important normative and practical implications for system design. (15) Specifically, Strandburg notes that inscrutable decision tools disrupt the explanatory flows among the multiple actors responsible for determining goals, selecting decision criteria, and applying those criteria. (16) Thus, seeking the explainability of automated decisions is not just for the benefit of the decision subjects, but really for the benefit of all interested in the outcomes. (17)

In a similar vein, Talia Gillis and Josh Simons have argued against focusing on accountability of individual actors. (18) Rather, they note that "[t]he focus on individual, technical explanations... [is] driven by an uncritical bent towards transparency." (19) Instead, they advocate that "[i]nstitutions should justify their choices about the design and integration of machine learning models not to individuals, but to empowered regulators or other forms of public oversight bodies." (20)

Furthermore, Professor Pauline Kim makes the case that the law does allow for the revision of algorithmic systems to address bias. (21) Thus, she argues that the law permits using auditing to detect and correct for discriminatory bias. (22) Kim argues that auditing should be an important strategy for examining whether the outcomes of automated hiring systems comport with equal opportunity in employment guidelines. (23)

The insights of these legal scholars and others (24) form the foundation for my contribution in this Article, in which I posit an auditing imperative for automated hiring systems. Building on Professor Kim's essay, I argue not just that the law allows for such audits, but that the spirit of antidiscrimination law requires them. That is, I follow in the footsteps of legal scholars like Professors Richard Thompson Ford, (25) James Grimmelmann, (26) Robert Post, (27) David Benjamin Oppenheimer, (28) and Noah Zatz, (29) to argue that employment antidiscrimination law imposes an affirmative duty of care on employers to ensure that they are avoiding practices that would constrain equal opportunity in employment. Thus, I argue that, when employers choose to use algorithmic systems, fulfilling their duty of care entails regular audits of those systems. In turn, audits necessitate the record-keeping and data retention mandates that I also propose in this Article.

I note here that automated hiring systems exist in a plethora of forms, with each iteration presenting distinct legal issues. This is because not every form of automated hiring offers the same level of automation. Ranging from the least automated (which allows for the most human intervention) to the most automated (which allows for the least human intervention), there are: applicant tracking systems ("ATS"), which employ algorithms that parse resumes for keywords; (30) machine learning algorithms that could be trained to select resumes and deployed to rank hundreds or thousands of them; (31) and video screening systems, such as HireVue, which provide automated assessments based on facial analysis and vocal indications. (32) To offer a full portrait of the proliferation of automated hiring platforms and associated legal issues, the Appendix offers a survey of extant automated hiring systems in which I detail a sampling of the companies currently using those systems, as well as their potentially problematic features. This Article does not delve into the specific legal issues associated with each iteration of automated hiring system; rather, it recognizes that job applicants face several common legal problems regardless of which iteration of automated hiring system applies, and that the greatest obstacle is meeting the standard of proof for employment discrimination.

But first, consider the growing trend towards automated video interview assessment as perhaps the most extreme of automated hiring systems. According to one article, one of the leaders in the automated video interview market, HireVue, "uses AI to analyze word choice, tone, and facial movement of job applicants who do video interviews." (33) For some...