The Legal Implications of Explaining Artificial Intelligence

Jurisdiction: United States, Federal, European Union
Publication year: 2022
Citation: Vol. 5 No. 3

David van Boven, Paul B. Keller, Harriet Ravenscroft, Jill Ge, Wentao Zhai, and Arwen Zhang *

This article reviews current legal requirements for explaining artificial intelligence, discusses their legal implications, and analyzes legislative developments in the European Union, the United States, and China.

HAL 9000: "I'm sorry, Dave. I'm afraid I can't do that."

—2001: A Space Odyssey

Artificial Intelligence

The concept of artificial intelligence ("AI") has captured our imagination for generations in books and movies. The underlying technology has been with us for some time, but society is now on the cusp of its widespread use throughout daily life. From healthcare to finance to transportation to farming, AI touches every aspect of our lives, and that use promises only to grow and become ever more sophisticated.

The promise of AI technology is plain: more efficient use of limited resources to achieve results as good as, or better than, a human could achieve, and in a fraction of the time. Trust in this technology, however, is still very much in development. As has been repeatedly demonstrated, society is cautious when it comes to new technologies, and it frequently takes years (if not generations) to adopt them, even when the evidence plainly indicates that they are better and safer. AI is proving to be no exception.

Some of the issues of trust are rooted in the fundamental nature of the technology. Many deep learning neural networks rely on machine learning algorithms whose reasoning cannot be examined after a decision or prediction has been made. These networks rearrange their connections, and the strength of those connections, in response to patterns in the data they process, which means that once a neural network has been trained, even its designer cannot know exactly how it does what it does. People need the power to disagree with, or reject, an automated decision, but they cannot do so if they are unable to understand the AI system's decision. This lack of transparency creates issues of trust for the user and is commonly referred to as the "black box" problem.
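To make the point concrete, consider a minimal sketch (assuming the scikit-learn library; the model and data are illustrative only, not drawn from the article): every learned weight of a trained network can be printed, yet those numbers offer no human-readable account of any individual decision.

```python
# A trained network's parameters are fully visible as numbers, yet
# those numbers carry no human-readable rationale for any prediction.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=1000,
                      random_state=0).fit(X, y)

# Every learned connection strength can be inspected...
for i, weights in enumerate(model.coefs_):
    print(f"layer {i}: weight matrix of shape {weights.shape}")

# ...but the prediction below comes with no explanation of "why."
print("prediction for first sample:", model.predict(X[:1]))
```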

This issue is not limited to any one particular application of AI; it touches the full spectrum of potential uses. As autonomous vehicles ("AVs") come to roam the streets and smart cameras become ever more present in cities and towns, public understanding of what happens under the hood may fade away. As the deployment of AI accelerates, human understanding of AI, "and also the ability to give informed consent, could be left behind." 1 For society to rely more fully on this technology, gaining trust may require greater transparency, and the accuracy of AI will need to improve. As has been previously argued, however, there is a trade-off between accuracy and explainability.

Explainable AI

This article addresses one of the myriad ways that the industry is trying to address this problem: Explainable AI ("XAI"). XAI seeks to allow humans to understand why the system did what it did and not something else. Akin to an audit trail but much more complex, XAI may offer a means for humans to "check" how the AI is operating and to identify the fault (and potential liability) if something goes wrong. The use, or non-use, of this technology therefore has a number of legal implications. Set out below is a thorough overview of the technology and a discussion of (1) the legal requirements and (2) the legal implications in three of the world's major markets: the United States, Europe, and China.
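As a hedged illustration of what such a "check" might look like in practice, the sketch below uses permutation importance, one common post-hoc explanation technique; the choice of scikit-learn and of this particular technique is an assumption for illustration, not a method endorsed by the article.

```python
# Post-hoc "audit" of a model: shuffle one input feature at a time and
# measure how much predictive accuracy degrades; large drops indicate
# features the model's decisions actually depend on.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
# Report the five most influential inputs, a crude "audit trail."
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[idx]}: {result.importances_mean[idx]:.3f}")
```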

AI is here to stay, and disputes relevant to clients are inevitable. In the Dutch courts, for instance, Uber drivers sought insight into the company's algorithms. It seems likely that regulators and courts will want to know what is going on "inside" a piece of AI. A company might choose to ignore the risk of being legally obligated to provide this information, but the penalties (reputational with respect to ethical issues, and financial or operational with respect to legal issues) might be severe. Even if a client does have an XAI system, what might still go wrong?

Many have written about the negative effects of AI. Research has shown that "image search results for certain jobs exaggerate gender stereotypes." 2 Zuiderveen Borgesius argued that while AI may have discriminatory effects, as in the case of search engine image results, the AI itself is not inherently evil. 3 Rather, the dataset may contain biases that reflect human biases. AI is merely a complex system, not a nondeterministic one; any bias in its output can therefore only reflect bias in the input data or in how the AI was trained. Similarly, an AV is autonomous only in the sense that the end user has to make fewer decisions. Responsibility is not transferred to the AI, however, but to the developer of the AI; in a sense, all future decisions of the AV about how to act in given situations have been made by the developer, through coding and the choice of training set, before the AV leaves the factory.
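The "bias in, bias out" point lends itself to a small demonstration. In the synthetic sketch below (all data, variable names, and weights are invented for illustration), a model trained on historically skewed labels simply absorbs that skew:

```python
# Synthetic demonstration: a model trained on biased historical labels
# reproduces the bias; it does not invent one of its own.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)    # a protected attribute (0 or 1)
skill = rng.normal(0.0, 1.0, n)  # the legitimate signal
# Historical bias: group 1 was approved less often at equal skill.
label = (skill - 0.8 * group + rng.normal(0.0, 0.5, n)) > 0

X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, label)
# A clearly negative coefficient on `group` shows the model has
# learned the human bias embedded in the training data.
print("coefficient on group:", round(model.coef_[0][0], 3))
```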

Technology brings ethical dilemmas. Nyholm and Smids posed the following scenario: 4 a self-driving car with five passengers approaches another car that has swerved out of its lane and is heading toward it. The self-driving car senses the oncoming car's trajectory and calculates that a collision is inevitable and will kill the five passengers, unless the self-driving car turns sharply toward the pavement, where a pedestrian is walking; the pedestrian will die from the impact. In this scenario, the humans in the self-driving car cannot take control of the car, so the car's AI must decide. This kind of scenario concerning crashing algorithms, 5 called in the literature an "applied trolley problem" 6 or "collision management," 7 offers a stark example of the ethical dilemmas that an AI may be faced with. 8

However, Roff argues that thinking about the trolley problem distracts from understanding the processes of the AI. 9 A computer program is incapable of "having morals" independent of the opinions and biases put into it. As Goodman and Flaxman pointed out, several studies have focused on algorithmic profiling: by explaining the AI, it is possible both to identify discrimination and to implement interventions that correct for it. 10 By explaining the AI, users could better understand and trust the machine learning capability. 11 But explainability is traded off against accuracy, some argue. It is accepted that explainable models and basic machine learning algorithms, such as decision trees, are easily understandable by the human brain: we can simply follow the path the decision tree took to reach its decision. 12 However, these simpler models are less accurate. When we use deep learning neural networks, model accuracy increases, but because these models are far more complex, their explainability decreases. This is the trade-off that has to be made: do we sacrifice explainability to produce a more accurate model?
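The claim that one can "simply follow the path" of a decision tree can be shown directly. A minimal sketch, again assuming scikit-learn:

```python
# A shallow decision tree is explainable in the literal sense: its
# entire decision logic can be printed and followed by a human.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(
    data.data, data.target)

# Prints the learned rules as nested if/else conditions; a deep neural
# network offers no comparably readable artifact.
print(export_text(tree, feature_names=list(data.feature_names)))
```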

It will be argued below that data protection law is not conclusive on the legal requirement to explain AI. Rather, as Hacker et al. have argued, 13 it is expected that XAI will become a legal requirement in other legal domains, such as civil liability. Sector-specific regulations, such as the draft Digital Services Act, will also include transparency requirements regarding the underlying parameters of advertising, and the draft AI Regulation contains human oversight, transparency, and traceability requirements for high-risk AI applications.

Data Protection

The EU's data protection law, the General Data Protection Regulation ("GDPR"), regulates the processing of personal data. 14 According to the European Data Protection Board ("EDPB"), "any processing of personal data through an algorithm falls within the scope of the GDPR." 15 If an AI system processes personal data, the GDPR may apply, but not all AI systems process personal data. Still, even where an AI system is not designed to process personal data, the line between personal data and non-personal data is increasingly unclear, because complete and permanent anonymization is difficult to achieve and aggregated datasets carry re-identification risks. 16

One can take the collision management scenario a step further and consider what personal data would be needed to make decisions on a basis more sophisticated than a simple utilitarian one (i.e., kill the fewest people). Such decisions would likely require access to personal data: age (are younger people of more value than older people?), family life (are parents of more value than people with no dependants?), and health (is someone in good health of more value than someone with a preexisting condition?). The collision management thought experiment thus highlights the importance of personal data. Modern-day cars are already equipped with cameras, global positioning systems, and communication capabilities, 17 while autonomous vehicles are packed with even more sensors. 18
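Purely to make the thought experiment concrete, and emphatically not as a design recommendation, the toy sketch below shows why any rule more sophisticated than head-counting must consume personal data; every field, function, and weight here is hypothetical.

```python
# Hypothetical illustration only: a collision rule that goes beyond
# "kill the fewest people" necessarily takes personal data as input,
# and its weights are ethical judgments fixed by the developer
# before the vehicle ever leaves the factory.
from dataclasses import dataclass

@dataclass
class Person:
    age: int              # personal data
    has_dependants: bool  # personal data (family life)
    # ...a richer rule would demand still more such fields

def utilitarian_choice(option_a, option_b):
    """The simple rule needs only head counts, no personal data.
    Returns which group of potential casualties it deems the lesser harm."""
    return "A" if len(option_a) <= len(option_b) else "B"

def weighted_choice(option_a, option_b):
    """A 'sophisticated' rule: the 0.5 weight below is an ethical
    judgment the developer has baked into the code."""
    def cost(people):
        return sum(1.0 + (0.5 if p.has_dependants else 0.0)
                   for p in people)
    return "A" if cost(option_a) <= cost(option_b) else "B"
```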

The GDPR sets out rights that data subjects can invoke vis-à-vis those who process their personal data. Articles 13-15 provide rights to "meaningful information about the logic involved" in the case of automated decision-making; these provisions are referred to by some as the right to an explanation. 19 More specifically, Article 22 of the GDPR provides that automated decision-making that significantly affects the data subject is prohibited unless the decision-making is necessary for a contract and the data subject's rights are properly safeguarded, or the data subject has consented to it. The functionality and effectiveness of these rights are heavily debated, with some arguing that they are too limited and too unclear to be meaningful. 20

Recital 71 points to the...
