Asking ‘Why’ in AI: Explainability of intelligent systems – perspectives and challenges

Received: 31 July 2017 | Revised: 12 January 2018 | Accepted: 29 January 2018
DOI: 10.1002/isaf.1422
RESEARCH ARTICLE
Alun Preece
Crime and Security Research Institute, Cardiff University, Friary House, Greyfriars Road, Cardiff, CF10 3AE, UK
Correspondence
Alun Preece, Crime and Security Research Institute, Cardiff University, Friary House, Greyfriars Road, Cardiff, CF10 3AE, UK.
Email: PreeceAD@cardiff.ac.uk
Funding information
U.S. Army Research Laboratory; UK Ministry of Defence, Grant/Award Number: W911NF-16-3-0001
Summary
Recent rapid progress in machine learning (ML), particularly so-called ‘deep learning’, has led to a resurgence in interest in the explainability of artificial intelligence (AI) systems, reviving an area of research dating back to the 1970s. The aim of this article is to view current issues concerning ML-based AI systems from the perspective of classical AI, showing that the fundamental problems are far from new, and arguing that elements of that earlier work offer routes to making progress towards explainable AI today.
KEYWORDS
artificial intelligence, explainability, interpretability, machine learning
1 INTRODUCTION
An explanation is commonly defined as a reason or justification given for an action or belief. Typically, an explanation provides new information linked to the thing that it is intended to explain and, as with all information, is subject to interpretation by its recipients. In psychological terms, explanations are characterized by a variety of models and schemas, including causal structures, domain-specific patterns (e.g., scientific explanations), and cultural schemas (Keil, 2006).
Artificial intelligence (AI) is concerned with the creation of computer systems (or ‘agents’) that take actions or express beliefs based on processes that, if exhibited by a natural agent, would be considered as ‘intelligent’ (Russell and Norvig, 2010). It therefore follows that the generation of explanations has always been a key issue in AI: developers and users of AI systems need to be able to obtain reasons or justifications for the actions or outputs of the machine, and often expect the system to generate explanations that exhibit traces of ‘intelligent processing’. As with all explanations, those from an AI system are subject to interpretation, and therefore need to use communicable representations such as mathematical, logical, linguistic, or visual forms.
The interest in explainability of AI systems is naturally linked to surges of interest in AI. The ‘classical’ period of progress in AI, from the 1970s to the early 1990s, featured a corresponding phase of interest in methods for explanation generation in largely symbolic reasoning systems, including so-called ‘expert systems’ (Jackson, 1999). Significant progress was made on explainability during this period, with solid principles established, but the problem was not considered to have been completely solved.
The recent rapid progress in machine learning (ML), particularly so-called ‘deep learning’ (LeCun, Bengio, and Hinton, 2015), has led to a resurgence in interest in explainability.¹ Issues of transparency and accountability have been highlighted as specific areas of concern (Diakopoulos, 2016). Transparency is increasingly viewed from a legal and ethical standpoint as well as a technical one. There is growing concern around issues of fairness in machine decision making, particularly arising from biases in the data on which machine learning or statistical decision-support algorithms are trained (Olhede and Rodrigues, 2006). These issues are particularly problematic from a societal perspective where the algorithmic biases relate to characteristics associated with equality and diversity, e.g., gender, race, or religion (Caliskan, Bryson, and Narayanan, 2017). Moreover, there are international efforts to enshrine algorithmic decision making within legal frameworks; for example, the European Union's proposed General Data Protection Regulation is due to come into force in 2018, creating a ‘right to explanation’ entitling an individual to receive an explanation of any decision made by an algorithm about them (Goodman and Flaxman, 2016).
The aim of this article is to view these current issues concerning ML-based AI systems from the perspective of classical AI, showing that the fundamental problems are far from new, and arguing that elements of that earlier work offer routes to making progress towards explainable AI today. Section 2 reviews progress in explanation generation during the 1970s–1990s knowledge-based systems era. Section 3
