‘Just like I thought’: Street‐level bureaucrats trust AI recommendations if they confirm their professional judgment
Published date | 01 March 2023
Author | Friso Selten, Marcel Robeer, Stephan Grimmelikhuijsen
DOI | http://doi.org/10.1111/puar.13602
RESEARCH ARTICLE
‘Just like I thought’: Street-level bureaucrats trust AI
recommendations if they confirm their professional judgment
Friso Selten 1 | Marcel Robeer 2 | Stephan Grimmelikhuijsen 3

1 Institute of Public Administration, Leiden University, The Hague, The Netherlands
2 National Police Lab AI, Utrecht University, Utrecht, The Netherlands
3 Utrecht University School of Governance, Utrecht, The Netherlands
Correspondence
Friso Selten, Institute of Public Administration,
Leiden University, Turfmarkt 99, 2511 DP, The
Hague, The Netherlands.
Email: f.j.selten@fgga.leidenuniv.nl
Funding information
Dutch National Science Foundation,
Grant/Award Number: 406.DI.19.011
[Correction added on 6 February 2023, after first
online publication: The copyright line was
changed.]
Abstract
Artificial Intelligence is increasingly used to support and improve street-level decision-making, but empirical evidence on how street-level bureaucrats' work is affected by AI technologies is scarce. We investigate how AI recommendations affect street-level bureaucrats' decision-making and if explainable AI increases trust in such recommendations. We experimentally tested a realistic mock predictive policing system in a sample of Dutch police officers using a 2 × 2 factorial design. We found that police officers trust and follow AI recommendations that are congruent with their intuitive professional judgment. We found no effect of explanations on trust in AI recommendations. We conclude that police officers do not blindly trust AI technologies, but follow AI recommendations that confirm what they already thought. This highlights the potential of street-level discretion in correcting faulty AI recommendations on the one hand, but, on the other hand, poses serious limits to the hope that fair AI systems can correct human biases.
Evidence for practice
• Artificial Intelligence-based recommendations play an increasingly important role in supporting decision-making by street-level bureaucrats, such as police officers.
• Street-level bureaucrats trust and follow AI recommendations that are congruent with their intuitive professional judgment.
• AI systems do not overturn intuitive professional judgments, even if they are well-explained.
Artificial Intelligence (AI) is rapidly changing public orga-
nizations across the globe (Young et al., 2019). Specifi-
cally, machine learning approaches not only automate
routine administrative tasks, but are used to design AI
systems that improve the quality of discretionary
decision-making of street-level bureaucrats by steering
their judgment (Bullock, 2019; Zouridis et al., 2020).
However, how street-level bureaucrats interact with AI
systems can be complex. For instance, a predictive policing system might recommend that a police officer surveil a certain area, while the police officer thinks that other
neighborhoods have much higher crime risks. Similarly,
an AI system might recommend that a defendant should
be released on parole, while the judge believes the defen-
dant should remain in custody (Brayne & Christin, 2021).
Street-level bureaucrats, confronted with such a dilemma,
have to decide: do they follow the AI recommendation or
their own intuitive professional judgment?
Scholars have noted that the empirical knowledge of
the impact of AI on street-level bureaucrats’ behavior is
limited (Giest & Grimmelikhuijsen, 2020; Peeters, 2020).
Therefore, the first aim of this article is to investigate what
happens when AI recommendations are congruent or
incongruent with a street-level bureaucrat’s intuitive pro-
fessional knowledge, that is, their expertise based on
training activities and on-the-ground experience (Maynard-
Moody & Musheno, 2000). We test two prominent and
competing theories from psychology to better understand
how professional knowledge and AI recommendations
interact: automation bias and confirmation bias.
On the one hand, the use of AI can restrict the exer-
cise of frontline discretion because decision makers are
overconfident in the rationality of AI (Skitka et al., 1999;
Young et al., 2019). Such automation bias leads users to
wrongfully neglect evidence that originates from outside
a computer system. Automation bias has been found in
highly automated environments such as aviation and
health care (Lyell & Coiera, 2017), and would indeed sug-
gest that street-level decision-making is strongly affected
by computer prompts. On the other hand, from the litera-
ture on motivated reasoning and confirmation bias, we
know that individuals tend to stick with a preferred conclu-
sion and that this leads to selective and biased information
processing (Kunda, 1990; Taber & Lodge, 2006). Such con-
firmation bias also occurs amongst those who are more
knowledgeable about a topic (Mendel et al., 2011). This
would suggest that street-level bureaucrats will not follow
all AI recommendations but ignore AI-generated output in
case it contradicts their professional knowledge.
The second aim of this paper is to investigate how
explainable AI (XAI) affects the trustworthiness and accep-
tance of AI recommendations. There is a growing demand
for AI that not only performs well, but that is also transpar-
ent, explainable, and trustworthy (Giest & Grimmelikhuijsen,
2020). This is the goal of a specific area of AI research called
explainable AI (XAI) (Adadi & Berrada, 2018). Research into XAI has demonstrated that explanations enable users to spot algorithmic mistakes (Ribeiro et al., 2016). At the same time,
XAI can have negative effects. Van der Waa et al. (2021)
demonstrated that explanations can persuade users to fol-
low incorrect recommendations. Overall, the empirical
knowledge on the impact of XAI is limited, specifically
within complex public decision-making processes (Giest &
Grimmelikhuijsen, 2020; Peeters, 2020).
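To make concrete what an "explained" recommendation can look like, the following minimal Python sketch pairs a synthetic crime-risk prediction with the feature importances reported by a simple model. This is not the system studied in the article: the feature names, the synthetic data, and the choice of a decision-tree importance explanation are all assumptions for illustration.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
features = ["recent_burglaries", "distance_to_highway", "time_since_last_patrol"]

# Synthetic training data: 500 hypothetical areas labeled high/low crime risk.
X = rng.normal(size=(500, len(features)))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

def explain_recommendation(x):
    """Return the predicted risk and the features the model weighs most heavily."""
    risk = model.predict_proba(x.reshape(1, -1))[0, 1]
    ranked = sorted(zip(features, model.feature_importances_),
                    key=lambda fw: fw[1], reverse=True)
    return risk, ranked

risk, ranked = explain_recommendation(X[0])
print(f"Predicted high-crime risk for this area: {risk:.2f}")
for name, weight in ranked:
    print(f"  {name}: importance {weight:.2f}")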
This article investigates the effects of AI recommenda-
tions on a typical street-level bureaucrat: the police officer
(Lipsky, 2010; Maynard-Moody & Musheno, 2003). Investi-
gating the effects of AI recommendations on police offi-
cers is especially relevant as the police force is one of the
largest public-sector areas in which AI systems are being
implemented that can heavily infringe on people’s lives
(e.g. Meijer et al., 2021). In addition, in many countries the
police are at the forefront of AI adoption. Police organiza-
tions use AI systems to, for instance, forecast high crime
risk areas, pre-identify young offenders, analyze vehicle
movement patterns, and assist citizens with crime report-
ing (Dechesne et al., 2019; Meijer & Wessels, 2019).
At the same time, police work cannot be completely
automated given the high degree of uncertainty and politi-
cal sensitivity associated with the tasks they perform
(Bullock et al., 2020). Police officers and AI systems, there-
fore, have to interact and collaborate. In the present study,
we research this interaction by investigating how police
officers utilize AI recommendations that are congruent and
incongruent with their professional judgment and how
explainable AI affects how they perceive these recommen-
dations. We investigate the following research question:
What is the effect of AI recommendations and explain-
able AI on decision-making of street-level police officers?
To answer this question, we designed a 2 × 2
repeated measures factorial vignette experiment in which
we tested how police officers interact with a realistic
mock AI system that assists police officers with fencing
off the area of a crime. This application is based on an AI
system currently being developed by the Dutch police. A
population-based sample of 124 street-level police officers was
recruited for the experiment. Participants completed
three similar vignettes with high mundane realism, result-
ing in 294 observations in total. Participants were
exposed to a combination of the following two factors: an
AI recommendation that was congruent or incongruent
with their intuitive professional knowledge (first factor),
and an AI recommendation that was explained or unex-
plained (second factor).
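As a purely illustrative sketch of this design, the 2 × 2 within-subjects structure can be expressed in a few lines of Python. The assignment rule, seeding, and condition labels below are assumptions for illustration; the authors' actual counterbalancing scheme is not described in this section.

import itertools
import random

# Factor 1: is the AI recommendation congruent with the officer's own judgment?
CONGRUENCE = ["congruent", "incongruent"]
# Factor 2: is the recommendation accompanied by an XAI explanation?
EXPLANATION = ["explained", "unexplained"]

# The four cells of the 2 x 2 factorial design.
CONDITIONS = list(itertools.product(CONGRUENCE, EXPLANATION))

def assign_vignettes(participant_id, n_vignettes=3):
    """Illustrative assignment: each participant completes three vignettes,
    each drawn from one of the four cells (hypothetical rule, not the authors')."""
    rng = random.Random(participant_id)  # seeded only to make the example reproducible
    return rng.sample(CONDITIONS, k=n_vignettes)

for pid in range(1, 4):
    print(pid, assign_vignettes(pid))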
The results of this study indicate that police officers
only trust AI recommendations that confirm what they
already thought; police officers have more trust in AI rec-
ommendations that are congruent with their professional
knowledge than AI recommendations that are incongru-
ent with their professional knowledge. This implies that
rather than being subject to automation bias, street-level
bureaucrats are prone to confirmation bias when interact-
ing with AI systems. Moreover, we found that police
officers’ trust in AI recommendations is not affected by
AI-generated explanations (XAI), meaning that it will be
hard to overturn intuitive professional judgments, even if
AI recommendations are well-explained. In the next sec-
tion, we will first discuss the role of AI in street-level
decision-making and then turn to formulating and testing
three hypotheses on how (explained) AI recommenda-
tions are expected to impact street-level decision makers.
ARTIFICIAL INTELLIGENCE IN STREET-LEVEL
BUREAUCRACY
Street-level decision-making is characterized by the exer-
cise of administrative discretion (Maynard-Moody &
Musheno, 2003, p. 9). Exercising administrative discretion is
necessary because of a mismatch between general rules
and their application in specific local situations. Public offi-
cials are expected to base their decisions on pre-defined
laws, procedures, and standards but these rules hardly ever
fully correspond to the complex local realities of frontline
work. Street-level bureaucrats translate general rules and
competing values into client-level decisions (Lipsky, 2010).
This constitutes administrative discretion: “the freedom
that street-level bureaucrats have in determining the sort,
quantity, and quality of sanctions and rewards during pol-
icy implementation” (Tummers & Bekkers, 2014, p. 529).
Administrative discretion has positive and negative
consequences. The advantage of administrative discretion
is that it allows for experience, local knowledge, sympa-
thy, empathy, insight, and flexibility in frontline work