Understanding algorithm aversion: When is advice from automation discounted?

Authors: Andrew Prahl, Lyn Van Swol
Published: 1 September 2017
DOI: https://doi.org/10.1002/for.2464
RESEARCH ARTICLE
Department of Communication Arts,
University of Wisconsin-Madison,
Madison, Wisconsin, USA
Correspondence
Andrew Prahl, Department of Communication Arts, University of Wisconsin-Madison, Madison, WI 53706, USA.
Email: aprahl@wisc.edu
Abstract
Forecasting advice from human advisors is often utilized more than advice from automation. There is little understanding of why algorithm aversion occurs, or of the specific conditions that may exaggerate it. This paper first reviews literature from two fields, interpersonal advice and human-automation trust, that can inform our understanding of the underlying causes of the phenomenon. Then an experiment is conducted to search for these underlying causes. We do not replicate the finding that human advice is generally utilized more than automated advice. However, after receiving bad advice, utilization of automated advice decreased significantly more than utilization of advice from humans. We also find that decision makers describe themselves as having much more in common with human than with automated advisors, despite there being no interpersonal relationship in our study. Results are discussed in relation to other findings from the forecasting and human-automation trust fields, and they provide a new perspective on what causes and exaggerates algorithm aversion.
KEYWORDS
advice, algorithm aversion, automation, computers, trust
1 | INTRODUCTION
Computers, robots, algorithms, and other forms of automation are quickly becoming a fundamental part of many decision-making processes in both personal and professional contexts. From forecasting product sales (Fildes, Goodwin, Lawrence, & Nikolopoulos, 2009) to informing medical and management decisions (Esmaeilzadeh, Sambasivan, Kumar, & Nezakati, 2015; Inthorn, Tabacchi, & Seising, 2015; Prahl, Dexter, Braun, & Van Swol, 2013), people frequently seek and receive advice from nonhuman (automation) sources when facing important decisions. Yet, despite seeking advice from automation, decision makers frequently discount the advice obtained from it, especially when compared to advice from a human advisor (Önkal, Goodwin, Thomson, Gönül, & Pollock, 2009).
The irrational discounting of automation advice has long been known and was a source of the spirited "clinical versus actuarial" debate in clinical psychology research (Dawes, 1979; Meehl, 1954). Recently, this effect has been noted in forecasting research (Önkal et al., 2009) and has been called algorithm aversion (Dietvorst, Simmons, & Massey, 2015). A developing area of research seeks to identify interventions that increase trust in automation advice, such as providing confidence intervals or allowing human judges to slightly modify automation forecasts (Dietvorst, Simmons, & Massey, 2016; Goodwin, Gönül, & Önkal, 2013). This research is important, but more is needed on the underlying psychological processes that affect the discounting of automation advice, especially in comparison to human advice. This paper examines trust as a factor that may underlie differences in the use of advice. First, we summarize literature in two related fields that can inform forecasting research about algorithm aversion: interpersonal advice and human-automation trust.
In addition to providing a new perspective on automated advice, both the advice field and human factors provide theoretical frameworks that we use to generate hypotheses and
Received: 20 May 2016 | Revised: 1 December 2016 | Accepted: 31 January 2017
Journal of Forecasting. 2017;36:691-702. Copyright © 2017 John Wiley & Sons, Ltd. wileyonlinelibrary.com/journal/for
