Published: 1 March 2023
DOI: 10.1111/puar.13483
RESEARCH ARTICLE
Explaining Why the Computer Says No: Algorithmic Transparency Affects the Perceived Trustworthiness of Automated Decision-Making
Stephan Grimmelikhuijsen
Utrecht University School of Governance
Funding information: Netherlands Organization for Scientific Research (NWO), Grant/Award Number: VENI-451-15-024
Abstract
Algorithms based on Artificial Intelligence technologies are slowly transforming street-level bureaucracies, yet a lack of algorithmic transparency may jeopardize citizen trust. Based on procedural fairness theory, this article hypothesizes that two core elements of algorithmic transparency (accessibility and explainability) are crucial to strengthening the perceived trustworthiness of street-level decision-making. This is tested in one experimental scenario with low discretion (a denied visa application) and one scenario with high discretion (a suspicion of welfare fraud). The results show that: (1) explainability has a more pronounced effect on trust than the accessibility of the algorithm; (2) the effect of algorithmic transparency pertains not only to trust in the algorithm itself but also, partially, to trust in the human decision-maker; (3) the effects of algorithmic transparency are not robust across decision contexts. These findings imply that transparency-as-accessibility is insufficient to foster citizen trust. Algorithmic explainability must be addressed to maintain and foster the trustworthiness of algorithmic decision-making.
Evidence for Practice
• Algorithmic transparency consists of accessibility and explainability.
• This study finds that accessibility, though important, is not sufficient to foster trust.
• Governments must address the explainability of algorithmic decision-making to earn citizen trust in algorithms and in the bureaucrats working with them.
INTRODUCTION
On June 25 and July 7, 2018, the City of Rotterdam used a system called SyRI (Systeem Risico Indicatie, or System Risk Indication) to carry out a risk analysis of welfare fraud on 12,000 addresses in a deprived neighborhood. The risk analysis used an algorithm that was fed by 17 datasets containing personal data on someone's fiscal, residential, educational, and labor situation. The city never published the algorithm's parameters and decision rules, nor were residents informed that they were being investigated for welfare fraud. Residents and activists protested and finally, in 2020, a Dutch court prohibited governments from using SyRI. A core reason for this, according to the verdict, was the lack of transparency of the algorithm used by this system.
The example above highlights the profound implications of automated decision-making and decision assistance in street-level bureaucracies. Where past automation replaced the need for human intervention in high-volume, relatively simple decisions with little discretion (Bovens and Zouridis 2002), a new generation of algorithmic applications under the umbrella of artificial intelligence (AI) aims to automate medium- and high-discretion decisions, which are set to affect access to and the apportioning of government resources (Young, Bullock and Lecy 2019; Zouridis, Van Eck and Bovens 2020). For individual bureaucrats, this means that their decisions are increasingly steered and disciplined by refined computer systems, which will eventually affect how bureaucrats interact with individual citizens
(Peeters 2020; Peeters and Widlak 2018). Other authors highlight that the introduction of algorithms in public organizations is altering organizational structures, routines, and culture (Meijer, Lorenz and Wessels 2021; Vogl et al. 2020) and requires different competencies from organizational leaders (Coulthart and Riccucci 2021).
While some emphasize the potential of such far-
reaching automated decision-making to make govern-
ment services more equitable and efficient (e.g. Pencheva
et al. 2020), others have heavily criticized this for produc-
ing biased and even discriminatory predictions because
of biased model parameters and/or biased data
(Eubanks 2018; O'Neil 2016). Often, human biases are consciously or unconsciously automated and integrated into automated decision-making.
A criticism underlying these potential biases is a lack of
algorithmic transparency and ultimately accountability
(Busuioc 2020; Meijer and Grimmelikhuijsen 2020). First, a
new generation of algorithms uses techniques to detect
patterns in data using only inputs (e.g. a training dataset
provided by humans). How certain patterns and outputs
based on these input data are generated has been referred
to as an algorithmic "black box". Such algorithms are not
readily understandable to humans, making them
unexplainable to citizens (Burrell 2016). Second, algorithms
are sometimes deliberately made inaccessible: they are often developed by commercial parties and are subject to and protected by intellectual property rights. Other algorithms are not accessible because governments fear that citizens subject to those algorithms will game the system once they have figured out how it works (Mittelstadt et al. 2016).
The lack of algorithmic transparency in street-level
bureaucracies specifically raises concerns about the trust-
worthiness of bureaucratic decision-making in which
these algorithms play a role (Došilović et al. 2018; Widlak
and Peeters 2018). An elaborate body of literature on pro-
cedural fairness shows that decisions that are not well-
explained or not open to comment are less acceptable
and decrease trust in the decision-maker (e.g. Lind, Kanfer
and Earley 1990; Tyler 2006). Inaccessible and inexplainable
algorithms may therefore erode trust, and this has led computer scientists to place algorithmic transparency at the center of efforts toward trustworthy algorithms (Miller 2019; Rudin 2019). Trustworthy algorithms are crucial for citizens specifically, as citizens are becoming increasingly dependent on these algorithms for the provision of essential services, such as welfare, reporting crimes, or
applying for a visa extension. Unlike most algorithms in
the private sector, citizens often have no choice other
than to trust that an algorithm treats them fairly.
While the link between algorithmic transparency and
trustworthiness seems straightforward, others argue that
"seeing inside a system does not necessarily mean understanding its behavior or origins" (Ananny and Crawford 2018, 980). Similarly, others argue that it is also
relevant, from a citizen perspective, to provide explana-
tions, which might help to maintain legitimacy (De Fine
Licht and De Fine Licht 2020). In other words, when algorithmic transparency is merely implemented as access to code, this is important for accountability, yet it is unlikely to increase people's understanding or the perceived trustworthiness of algorithmic decision-making.
Scholars in public administration have also looked at
how citizens view algorithmic versus human decision-mak-
ing. Recent studies have investigated the perceived fairness of automated versus human decision-making. Especially for complex tasks where human skills are deemed important, such as hiring and work evaluation, algorithms are met with more suspicion by the public (Lee 2018; Nagtegaal 2021). Street-level bureaucracy
research specifically highlights that AI-powered automa-
tion renders new concerns about the trustworthiness of
the decision-making process (Bullock et al. 2020;
Peeters 2020). For instance, decisions may become less tailored to the circumstances unique to an individual citizen, undermining so-called Einzelfallgerechtigkeit (justice to an individual case). This has initiated a debate on
whether automation curtails human discretion in street-
level decision-making too much (Buffat 2015).
This study ventures beyond the decision on whether or not to (partially) automate street-level decisions, by focusing on how algorithmic decision-making can be designed once it is implemented in practice. Because
of various waves of automation in street-level decision-
making, algorithms are already part and parcel of many
street-level bureaucracies (e.g. Bovens and Zouridis 2002).
Furthermore, many street-level decisions are neither purely algorithmic nor purely human; in many
cases, automation supports or supplants only part of a
decision-making process (Young, Bullock and Lecy 2019;
Zouridis, Van Eck and Bovens 2020). Therefore, this article
builds on previous work by testing various elements of
algorithmic transparency in human-machine interaction,
rather than exploring the effects of algorithmic versus
human decision-making (e.g. Nagtegaal 2021; Schiff,
Schiff and Pierson 2021).
Algorithmic transparency is often highlighted as a mechanism to ensure trustworthy algorithms, yet this hypothesized effect is debated and has rarely been tested empirically in a public administration context. Furthermore, this study provides a more refined test, as it conceptually distinguishes, and empirically tests, the effects of both accessibility and explainability on the perceived trustworthiness of automated decision-making. More systematic and more in-depth empirical research in our field is needed. The following research question is central to this article:
What is the effect of algorithmic transparency on the
perceived trustworthiness of automated decision-making?
To answer this question, I developed two related
scenario-based survey experiments. Both experiments
employed a between-subjects 2 × 2 factorial design. Each
factor independently varies one particular dimension of
algorithmic transparency: explainability and accessibility.
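As a purely illustrative aid, and not part of the study's original materials, the short Python sketch below shows what random assignment to the four cells of such a 2 × 2 between-subjects design could look like; the condition labels, the fixed seed, and the assign_condition helper are assumptions made for exposition only.

import random

# Illustrative sketch only: the four cells of a 2 x 2 between-subjects design,
# crossing the two transparency factors discussed above (labels are assumed).
CONDITIONS = [
    {"explainability": explainable, "accessibility": accessible}
    for explainable in (True, False)
    for accessible in (True, False)
]

def assign_condition(rng: random.Random) -> dict:
    """Draw one cell at random; each participant sees exactly one combination."""
    return rng.choice(CONDITIONS)

if __name__ == "__main__":
    rng = random.Random(2022)  # fixed seed so the illustration is reproducible
    for participant_id in range(8):
        print(participant_id, assign_condition(rng))

Because each simulated participant lands in exactly one cell, the two transparency factors vary independently across, rather than within, respondents, which is the defining feature of a between-subjects factorial design.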