Published 01 June 2021
NOTE
The Modern Lie Detector: AI-Powered Affect
Screening and the Employee Polygraph Protection
Act (EPPA)
COURTNEY HINKLE*
Predictive algorithms are increasingly being used to screen and sort
the modern workforce. The delegation of hiring decisions to AI-powered
software systems, however, will have a profound impact on the privacy of
individuals. This Note builds on the foundational work of legal scholars
studying the growing trend of algorithmic decisionmaking in recruiting
and hiring practices. However, this Note will differ from their analysis in
critical ways. Although this issue has primarily been studied through the
lens of federal antidiscrimination law and for the potential for algorithmic
bias, this Note will explore how federal privacy law, namely the
oft-forgotten Employee Polygraph Protection Act (EPPA), offers a more
robust regulatory framework.
This Note will specifically analyze the use of video-interviewing
screens that rely upon affect-recognition technology, which analyze an
applicant’s voice tonality, word choice, and facial movements. The current
vogue for AI-powered affect screening is, however, reminiscent of
an early period of employee screening tests: the lie detector. Congress
prohibited the use of lie detectors by employers in the 1980s. By embracing
old analytical shortcuts, which purport to correlate psychophysiological
responses with desired character traits, namely honesty, this
growing industry is operating in violation of federal law. This Note will
also critique the limits of antidiscrimination law, data protection law,
and consumer protection law to address the scope of privacy harms
posed by these screens.
TABLE OF CONTENTS
INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1202
I. THE CHALLENGE OF ALGORITHMIC HIRING SCREENS . . . . . . . . . . . . . . . . . 1205
* Georgetown University Law Center, J.D. expected 2021; The University of the South, B.A. 2012.
© 2021, Courtney Hinkle. I am grateful to Professor Julie E. Cohen for her invaluable guidance and
support in developing the paper that became this Note. I also want to acknowledge Jenny R. Yang and
Professor Danielle K. Citron for their thoughtful insights. Finally, I want to thank Orion de Nevers,
Anna Stacey, Maggie O’Leary, and all the Georgetown Law Journal editors and staff for their helpful
contributions.
A. FROM JOB BOARDS TO ARTIFICIAL INTELLIGENCE (AI) . . . . . . . . . . . . 1206
B. CURRENT REGULATORY APPROACHES . . . . . . . . . . . . . . . . . . . . . . . . . 1216
1. Antidiscrimination Law. . . . . . . . . . . . . . . . . . . . . . . . . . 1216
2. Consumer Protection Law . . . . . . . . . . . . . . . . . . . . . . . . 1222
3. Data Protection Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1225
II. THE RISE (AND FALL) OF THE LIE DETECTOR TEST . . . . . . . . . . . . . . . . . . 1230
A. THE QUEST FOR THE PERFECT LIE DETECTOR . . . . . . . . . . . . . . . . . . . 1231
B. LIE DETECTORS: MYTHS AND CRITICISMS . . . . . . . . . . . . . . . . . . . . . . 1236
1. Lack of Scientific Validity. . . . . . . . . . . . . . . . . . . . . . . . 1236
2. Privacy Violations and Human Dignity . . . . . . . . . . . . . . 1239
C. A FEDERAL RESPONSE: THE EMPLOYEE POLYGRAPH PROTECTION ACT
(EPPA) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1242
III. AFFECT RECOGNITION AND THE EPPA: WHAT’S OLD IS NEW AGAIN . . . 1244
A. FROM WRITTEN “INTEGRITY TESTS” TO AI-POWERED AFFECT
SCREENING . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1245
B. SAME THEORY, SAME CRITICISMS . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1247
1. Renewed Faith in Pseudoscience . . . . . . . . . . . . . . . . . . . 1247
2. Accelerating Privacy Harms . . . . . . . . . . . . . . . . . . . . . . 1249
3. Technological Solutionism . . . . . . . . . . . . . . . . . . . . . . . 1254
C. SAME RESULT: THE RETURN OF THE LIE DETECTOR . . . . . . . . . . . . . . 1257
CONCLUSION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1262
INTRODUCTION
Predictive algorithms are increasingly being used to screen and sort the modern
workforce.1
In the brave new world of algorithmic hiring, artificial intelligence
and machine learning tools are used to determine an applicant’s overall fit and
likelihood of success for a particular role.2
Some of these new tools hold the
1. Pauline T. Kim, Data-Driven Discrimination at Work, 58 WM. & MARY L. REV. 857, 857, 860
(2017); Claire Cain Miller, Can an Algorithm Hire Better Than a Human?, N.Y. TIMES (June 25, 2015),
https://www.nytimes.com/2015/06/26/upshot/can-an-algorithm-hire-better-than-a-human.html.
2. See Kim, supra note 1, at 860. The term “artificial intelligence” (AI) is used to define various
computational techniques for automating intelligent behavior, which are often used to predict future
outcomes based on analysis of past data; however, “[t]here is no single definition of AI that is
1202 THE GEORGETOWN LAW JOURNAL [Vol. 109:1201
promise—and the peril—of translating the practice of using paper-and-pencil
integrity tests into lines of code.3
For example, video-interviewing screens that
incorporate affect- or emotion-recognition technology––which purports to surface
desirable character traits hidden in each applicant’s subconscious by studying
voice tonality, word choice, and facial movements––are increasingly among
the most popular digital hiring tools on the market.4
Proponents of the technology
universally accepted by practitioners.” See COMM. ON TECH., EXEC. OFFICE OF THE PRESIDENT,
PREPARING FOR THE FUTURE OF ARTIFICIAL INTELLIGENCE 6–7 (2016), https://obamawhitehouse.
archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.
pdf [https://perma.cc/YHF3-FWH6] (providing examples of various definitions offered by experts).
These techniques include machine learning, deep learning, learning algorithms, and many other terms.
See id. at 8–9. For a more in-depth explanation, see Solon Barocas & Andrew D. Selbst, Big Data’s
Disparate Impact, 104 CALIF. L. REV. 671, 674 n.10 (2016), which defines an “algorithm” as “a formally
specified sequence of logical operations that provides step-by-step instructions for computers to act on
data and thus automate decisions.” See also Bernard Marr, What Is the Difference Between Artificial
Intelligence and Machine Learning?, FORBES (Dec. 6, 2016, 2:24 AM), https://www.forbes.com/sites/
bernardmarr/2016/12/06/what-is-the-difference-between-artificial-intelligence-and-machine-learning/?sh=
30f8a05e2742 (defining AI as “the broader concept of machines being able to carry out tasks in a way that
we would consider ‘smart,’” whereas machine learning is “a current application of AI based around the
idea that we should really just be able to give machines access to data and let them learn for themselves”).
Notably, a deep dive into the differences between the various types of AI is not necessary for the purpose of
this Note.
3. Integrity tests have been used “for decades to measure candidates’ attitudes toward theft,
dishonesty, absenteeism, violence, drug use, alcohol abuse and other counterproductive behaviors.” Bill
Roberts, Your Cheating Heart, SOC’Y FOR HUM. RESOURCE MGMT. (June 1, 2011), https://www.shrm.
org/hr-today/news/hr-magazine/pages/0611roberts.aspx [https://perma.cc/CYA4-QDF5] (providing a
history of integrity and personality testing by employers). Written, paper-and-pencil honesty tests
became an increasingly popular tool for employers beginning in the late 1980s and early 1990s as a
replacement for the previously preferred testing tool (the polygraph). Katrin U. Byford, Comment, The
Quest for the Honest Worker: A Proposal for Regulation of Integrity Testing, 49 SMU L. REV. 329, 331
(1996). The tests consisted of multiple-choice questions that would ask “overt” honesty questions
(“How often do you tell the truth?”) or “veiled purpose” or “personality-based” questions (“True or
False: I like to take chances.”). Id. at 332–33.
4. See Lilah Burke, Your Interview with AI, INSIDE HIGHER ED (Nov. 4, 2019), https://www.
insidehighered.com/news/2019/11/04/ai-assessed-job-interviewing-grows-colleges-try-prepare-students
[https://perma.cc/SP68-D9AT]; Businesses Turning to AI for Job Interviews, CBS NEWS (Feb. 20,
2020), https://www.cbsnews.com/video/businesses-turning-to-ai-for-job-interviews/; Hilke Schellmann,
How Job Interviews Will Transform in the Next Decade, WALL ST. J. (Jan. 7, 2020, 9:58 AM), https://
www.wsj.com/articles/how-job-interviews-will-transform-in-the-next-decade-11578409136; Jessica
Stillman, Delta and Dozens of Other Companies Are Using AI and Face Scanning to Decide Whom to
Hire. Critics Call It “Digital Snake Oil,” INC. (Oct. 30, 2019), https://www.inc.com/jessica-stillman/
delta-ikea-goldman-sachs-are-using-ai-face-scanning-to-decide-whom-to-hire-critics-call-it-digital-
snake-oil.html.
Algorithms that use affect- and emotion-recognition technology—a subset of facial-recognition
technology—are designed to “‘read’ our inner emotions by interpreting physiological data such as the
micro-expressions on our face,” and the information is used to make “sensitive determinations about
who is . . . a ‘good worker.’” KATE CRAWFORD, ROEL DOBBE, THEODORA DRYER, GENEVIEVE FRIED,
BEN GREEN, ELIZABETH KAZIUNAS, AMBA KAK, VAROON MATHUR, ERIN MCELROY, ANDREA NILL
SÁNCHEZ, DEBORAH RAJI, JOY LISI RANKIN, RASHIDA RICHARDSON, JASON SCHULTZ, SARAH MYERS
WEST & MEREDITH WHITTAKER, AI NOW INST., AI NOW 2019 REPORT 12 (2019), https://ainowinstitute.
org/AI_Now_2019_Report.pdf [https://perma.cc/5NP9-YGSQ].
Notably, in January 2021, HireVue—one of the most well-known vendors offering affect-recognition
video screens—announced it would be suspending the use of its software to analyze applicants’ facial
expressions to discern character traits. Will Knight, Job Screening Service Halts Facial Analysis of