Accountable Artificial Intelligence: Holding Algorithms to Account

Research Article
Author: Madalina Busuioc
Published: 01 September 2021
DOI: https://doi.org/10.1111/puar.13293
This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium,
provided the original work is properly cited.
Abstract: Artificial intelligence (AI) algorithms govern, in subtle yet fundamental ways, the way we live and are transforming our societies. The promise of efficient, low-cost, or “neutral” solutions harnessing the potential of big
data has led public bodies to adopt algorithmic systems in the provision of public services. As AI algorithms have
permeated high-stakes aspects of our public existence—from hiring and education decisions to the governmental use of
enforcement powers (policing) or liberty-restricting decisions (bail and sentencing)—this necessarily raises important
accountability questions: What accountability challenges do AI algorithmic systems bring with them, and how can we
safeguard accountability in algorithmic decision-making? Drawing on a decidedly public administration perspective,
and given the current challenges that have thus far become manifest in the field, we critically reflect on and map
out in a conceptually guided manner the implications of these systems, and the limitations they pose, for public
accountability.
Evidence for Practice
• The article provides public sector practitioners with insight into the distinct accountability challenges associated with the use of AI systems in public sector decision-making.
• It digests and explicitly links technical discussions on black-box algorithms, explainable AI, and interpretable models (different approaches aimed at model understandability) to public accountability considerations relevant for public bodies.
• It provides specific policy recommendations for securing algorithmic accountability, prominent among these the importance of giving preference in the public sector to transparent, interpretable models over black-box alternatives (whether black-box in a proprietary or in a technical sense, i.e., deep learning models). This will be critical to administrators’ ability to maintain oversight of system functioning and to discharge their account-giving duties to citizens for algorithmic decision-making.
A proprietary algorithm widely used by US courts to predict recidivism in both bail and sentencing decisions was flagged by ProPublica as biased against black defendants (Angwin et al. 2016); natural language processing (NLP) algorithms for textual analysis can display recurrent gender biases (Bolukbasi et al. 2016), for instance associating the word “doctor” with “father” and “nurse” with “mother” (a minimal illustration of this embedding behavior follows below); facial recognition algorithms have persistently been found to display much higher error rates for minorities (Buolamwini and Gebru 2018; Lohr 2018; Snow 2018; Medium 2019), potentially leading to false arrests and discrimination against already marginalized groups when used in policing (e.g., Garvie and Frankle 2016); and algorithms used in university admissions to predict exam grades have recently shown serious failures, with disparate negative effects on high-achieving students from disadvantaged backgrounds (Broussard 2020; Katwala 2020). These are only a few of the growing number of examples of bias encountered in algorithmic systems used not only in the private but also the public sector.
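A minimal sketch of how such analogy-style gender associations can be probed in pretrained word embeddings, assuming the gensim library and the publicly available word2vec-google-news-300 vectors (an illustrative setup, not the specific models examined by Bolukbasi et al. 2016):

import gensim.downloader as api

# Load publicly available pretrained word vectors (illustrative choice;
# download occurs on first use and the file is large).
vectors = api.load("word2vec-google-news-300")

# Analogy query: which words relate to "mother" as "doctor" relates to "father"?
# Embeddings trained on biased corpora tend to return stereotypically
# female-coded occupations such as "nurse" near the top of this list.
for word, score in vectors.most_similar(
        positive=["doctor", "mother"], negative=["father"], topn=5):
    print(f"{word}\t{score:.3f}")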
While algorithmic systems based on artificial intelligence (AI) are undoubtedly associated with tremendous technological innovation, and are predicted “to supercharge the process of discovery” (Appenzeller 2017), the examples above underscore the importance of oversight of AI algorithmic decision-making. As algorithmic systems have become increasingly ubiquitous in the public sector, they raise important concerns about meaningful oversight and accountability (Bullock 2019; Diakopoulos 2014; European Parliament Study 2019; Pasquale 2015; Yeung 2018; Yeung and Lodge 2019; Young, Bullock, and Lecy 2019) and about the need to identify and diagnose where the potential for accountability deficits associated with these systems might, first and foremost, lie.
Madalina Busuioc is Associate Professor at the Institute of Public Administration, Leiden University, where she leads a large European Research Council (ERC) grant investigating public sector reputation and its effects within the European regulatory state. She is also incoming Fernand Braudel Senior Fellow at the European University Institute (EUI, Florence), awarded for a project on “Accountable Artificial Intelligence in the Administrative State”.
Email: e.m.busuioc@fgga.leidenuniv.nl
Public Administration Review, Vol. 81, Iss. 5, pp. 825–836. © 2020 The Authors. Public Administration Review published by Wiley Periodicals LLC on behalf of The American Society for Public Administration. DOI: 10.1111/puar.13293.
