Machine learning and human capital complementarities: Experimental evidence on bias mitigation

Published date: 01 August 2020
Authors: Rajshree Agarwal, Evan Starr, Prithwiraj Choudhury
DOI: http://doi.org/10.1002/smj.3152
RESEARCH ARTICLE
Machine learning and human capital complementarities: Experimental evidence on bias mitigation

Prithwiraj Choudhury¹ | Evan Starr² | Rajshree Agarwal²

¹Harvard Business School, Boston, Massachusetts
²Robert H. Smith School of Business, University of Maryland, College Park, Maryland

Correspondence
Evan Starr, Robert H. Smith School of Business, University of Maryland, College Park, MD.
Email: estarr@rhsmith.umd.edu
Abstract
Research Summary: The use of machine learning (ML) for productivity in the knowledge economy requires considerations of important biases that may arise from ML predictions. We define a new source of bias related to incompleteness in real-time inputs, which may result from strategic behavior by agents. We theorize that domain expertise of users can complement ML by mitigating this bias. Our observational and experimental analyses in the patent examination context support this conjecture. In the face of input incompleteness, we find ML is biased toward finding prior art textually similar to focal claims and domain expertise is needed to find the most relevant prior art. We also document the importance of vintage-specific skills, and discuss the implications for artificial intelligence and strategic management of human capital.
Managerial Summary: Unleashing the productivity benefits of machine learning (ML) technologies in the future of work requires managers to pay careful attention to mitigating potential biases from its use. One such bias occurs when there is input incompleteness to the ML tool, potentially because agents strategically provide information that may benefit them. We demonstrate that in such circumstances, ML tools can make worse predictions than the prior technology vintages. To ensure productivity benefits of ML in light of potentially strategic inputs, our research suggests that
managers need to consider two attributes of human capital: domain expertise and vintage-specific skills. Domain expertise complements ML by correcting for the (strategic) incompleteness of the input to the ML tool, while vintage-specific skills ensure the ability to properly operate the technology.
KEYWORDS
bias, complementarities, domain expertise, human capital, machine learning
1 | INTRODUCTION
Artificial intelligence (AI) and machine learning (ML), where algorithms learn from existing patterns in data to conduct statistically driven predictions and facilitate decisions (Brynjolfsson & McAfee, 2014; Kleinberg, Lakkaraju, Leskovec, Ludwig, & Mullainathan, 2017), may well transform the future of work, with questions regarding whether it would substitute or complement human capital (Autor, 2015; Bughin et al., 2017; Frank et al., 2019). Despite the promise of ML in increasing productivity, many firms have encountered significant challenges due to biases in predictions,¹ often thought to result from biased training data and/or algorithms (Baer & Kamalnath, 2017; Bolukbasi, Chang, Zou, Saligrama, & Kalai, 2016; Polonski, 2018).²
In many important contexts, however, a third source of bias may arise because agents strategically alter the input to the algorithm, perhaps because they stand to benefit from biased predictions. For example, ML algorithms can speed up the reviewing of resumes in recruiting, or the processing of insurance claims. However, resumes and insurance claims are generated by applicants who have a strategic interest in positive outcomes. Can ML correct for such strategic behavior? Research in adversarial ML examines attempts to trick ML technologies (Goodfellow, Shlens, & Szegedy, 2014), and generally concludes that it is challenging to adversarially train the ML technology to account for every possible input. The combination of strategically generated inputs and imperfect adversarial training of ML creates biased predictions that stem from what we term input incompleteness. Accordingly, two important questions arise: How can firms mitigate such bias to unlock the potential of ML? And, how may human capital complement ML to do so?
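To make this source of bias concrete, the sketch below is our own illustration under stated assumptions, not the authors' examination technology: it uses off-the-shelf TF-IDF cosine similarity from scikit-learn as a stand-in for a text-similarity-based prior-art search, with invented prior-art snippets, claim wordings, and coined hyphenated terms.

```python
# A minimal, hypothetical sketch (not the authors' examination tool or data):
# plain TF-IDF cosine similarity stands in for a text-similarity-based
# prior-art search, and invented claim wordings show how strategically coined
# terms can leave the truly relevant prior art looking dissimilar.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented prior-art snippets; document 0 is the genuinely relevant reference.
prior_art = [
    "a rechargeable lithium ion battery cell with a ceramic separator layer",
    "a solar panel mounting bracket with adjustable tilt angle",
    "a wireless charging pad using inductive coupling coils",
]

# Two wordings of the same focal claim: one plain, one using coined,
# hyphenated terms of the kind applicants are free to introduce.
plain_claim = "a rechargeable lithium ion battery with a ceramic separator"
coined_claim = ("a re-energizable electro-chemical storage unit "
                "with an insulative micro-barrier membrane")

vectorizer = TfidfVectorizer().fit(prior_art)
art_matrix = vectorizer.transform(prior_art)

for label, claim in [("plain", plain_claim), ("coined", coined_claim)]:
    sims = cosine_similarity(vectorizer.transform([claim]), art_matrix)[0]
    best = sims.argmax()
    print(f"{label:>6} wording -> closest prior art: doc {best}, "
          f"similarity {sims[best]:.2f}")

# Expected pattern: the plain wording scores high against the relevant
# document, while the coined wording shares almost no vocabulary with it, so
# a purely textual tool ranks the relevant prior art poorly. A domain expert
# who recognizes the underlying concepts can correct for this gap.
```

The toy example only illustrates the mechanism: when the input wording diverges from the vocabulary of the relevant prior art, a purely textual ranking degrades, which is the gap that domain expertise is argued to fill.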
We examine answers to these questions in the context of ML technology used for patent examination, a context rife with input incompleteness. Patent examiners face a time-consuming challenge of accurately determining the novelty and nonobviousness of a patent application by sifting through ever-expanding amounts of prior art. Moreover, patent applicants are permitted by law to create hyphenated words and assign new meaning to existing words to accurately reflect novel inventions (D'hondt, 2009; Verberne, D'hondt, Oostdijk, & Koster, 2010). However, this freedom also allows patent applicants to strategically write their applications to enhance
¹ The word bias evokes many connotations. We use bias to reflect inaccurate predictions (e.g., Type 1 and Type 2 errors).
² Gender and racial biases resulted in Amazon discontinuing an ML-based hiring technology (Dastin, 2018), IBM and Microsoft coming under fire for ML-based facial recognition (Buolamwini & Gebru, 2018), and the Apple credit card being scrutinized through a regulatory investigation (Knight, 2019).