Author: Abungu, Cecil

CONTENTS

I. Introduction
II. Unlawful Discrimination in Algorithmic Decision-Making
   a. How Discrimination in Algorithmic Decision-Making Arises
   b. Why Longstanding Approaches to Preventing and Proving Discrimination are Inadequate
III. The Most Promising Proposed Solutions and the Foundation Required for Successful Implementation
   a. Major Proposed Solutions
      1. Ex Ante Scrutiny
      2. Ex Post Scrutiny
   b. Foundation Required for Successful Implementation
IV. Implementing the Most Promising Proposed Solutions in Developing Countries
V. Conclusion
VI. Annexure
   a. Use of algorithms in making impactful decisions
   b. Proactive executive agency or independent commission policing of discrimination
   c. Vigilant non-government actors attentive to algorithmic decision-making
   d. A well-rooted culture of transparency
   e. Regularly released statistical analyses touching on the disparities faced by protected groups

I. Introduction

A rich body of research arguing that discrimination is wrong for both deontological (1) and teleological (2) reasons helps explain why many countries around the world have set out protected characteristics upon which discrimination by both public and private decision-makers is prohibited. (3) Many developing countries have likewise done so through their constitutions and other legislation, but seem to end their commitment there, given that detection and action are left to the injured party. (4) That some part of this tenuous situation has held up (with some discrimination suits still being successfully brought before courts) has more to do with the fact that, so far, it is human conduct that has been at play. The use of algorithms to make decisions further complicates the process of identifying and remedying discrimination when it occurs.

While the use of machine-learning algorithms to make hugely consequential predictions and decisions continues to gain ground, (5) questions about how to police the fairness of such decisions and reduce the disparities faced by marginalized people in protected classes abound. (6) The discourse around these questions is burgeoning in developed countries but remains insufficient in developing countries. (7) The gap is particularly eye-catching since the use of algorithms to make decisions is taking root in the developing world nearly as quickly as it is in the developed world. (8) In any case, global capitalism's history counsels us to expect that people in the developing world will endure aggressive targeting by corporations which profit from selling tools that deploy algorithms in decision-making. (9)

Because of the significant efficiency gaps and low standards of wellbeing that developing countries have to contend with, (10) these nations readily sympathize with the argument that tools using algorithmic decision-making should be allowed to operate freely on the basis that the likely benefits outweigh the costs. (11) This claim is powerful, but flawed in that it sees people only in the aggregate. (12) Additionally, there is a risk that such tools raise the standard of wellbeing for already-privileged groups while expanding the inequalities suffered by marginalized people in protected classes. (13) As a result, the 'good' delivered by efficiency is in many cases not good enough. (14)

To further complicate the picture, algorithmic decision-making is creating new challenges for longstanding approaches to preventing and proving direct or indirect discrimination (also referred to hereafter as disparate treatment and disparate impact, respectively). There is now convincing evidence that approaches to preventing discrimination in algorithms that rely on proof of causation, significant correlation, or the exclusion of inputs are no longer tenable, meaning that detecting discrimination requires complicated examination of processes. (15) Moreover, while parties that deploy algorithms with a disparate impact can easily come up with a legal justification for the values and characteristics that the algorithm detects, individuals who are discriminated against will find it exceedingly hard to prove that another approach exists that would achieve the purpose of the algorithm without having a discriminatory impact. (16)

The latest solutions for ex ante and ex post scrutiny of algorithmic decision-making seem promising but rest on a foundation that necessitates: (i) a well-rooted culture of transparency and statistical analysis of the disparities faced by protected groups; (17) (ii) vigilant non-government actors attentive to algorithmic decision-making; (18) and (iii) reasonably robust and proactive independent or executive branch regulatory policing of discrimination. (19)

This article will show that the current discourse surrounding solutions to algorithmic discrimination is not attuned to the situation in a vast majority of developing countries. These countries often lack rich statistical analyses of the disparities faced by protected groups and struggle with negligible transparency. Furthermore, it is common for civil society groups to show little interest in algorithmic decision-making, and the administrative state plays no identifiable role in policing discrimination. This article argues that if these issues are ignored while algorithmic decision-making is allowed to take root in those countries, the result might be a future of increased disparities faced by groups which the individuals and institutions of those countries have already marginalized. (20)

If the age of algorithmic decision-making is to result in narrower disparities and less discriminatory conduct suffered by protected groups in developing countries, I propose that policymakers, lawmakers, researchers, donors, and civic activists will need to invest their wealth and efforts in mitigating the discriminatory impact of algorithmic decision-making. In an era where algorithmic tools are primarily designed by "people from the North," (21) the perspective that this study presents will also point out questions that developers need to consider as they design algorithmic tools.

This article will touch on discrimination by algorithms used by public and private bodies. Further, the article aims at the substantive rather than procedural goal of anti-discrimination law. (22) It will proceed as follows. Section II will consider unlawful discrimination in algorithmic decision-making, including how it arises and why the longstanding approaches that countries use to prevent and prove discrimination wither when confronted by algorithms. In Section III, the article will review the most promising approaches designed to ameliorate the challenge that algorithmic decision-making poses. This section will also discuss the foundation required for the success of those approaches. Section IV of the article will demonstrate that the foundation required for the successful implementation of the approaches does not exist in a vast majority of developing countries. It will also propose the way forward. Finally, the article concludes in Section V.

  II. Unlawful Discrimination in Algorithmic Decision-Making

    a. How Discrimination in Algorithmic Decision-Making Arises

      At its essence, machine learning involves the development of algorithms which enable a computerized system to analyze a dataset and yield functions (also known as rules or models): deterministic mappings from a set of input values to one or more output values. (23) Algorithms themselves can simply be defined as complex processes that a computer follows to reach decisions. (24) In machine learning, an initial algorithm gives the computerized system a function that guides its analysis of complex datasets to find recurring patterns. (25) From the captured patterns, the computerized system creates another algorithm with an updated function, and uses this function to analyze and reach decisions or predictions about similar real-life datasets. (26)

      The journey to creating--or constantly updating--the algorithm that gives the final prediction or decision is known as "training." (27) In the course of training, the initial algorithm processes a dataset (known as the training dataset) and comes up with the function which best matches the patterns in the dataset. (28) That function is then encoded in the computer as a model (29) to be used by the other algorithm to make inferences from new datasets. While the model that emerges usually captures patterns, associations, or correlations in a dataset, it does not explain the cause or nature of these links. (30)
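      The training process described above can be sketched in a few lines of code. The following is a minimal, hypothetical illustration (the dataset and the least-squares fitting rule are invented for this example, not drawn from any system discussed in this article): an initial algorithm processes a training dataset, comes up with the function that best matches its patterns, and that function (the model) is then used to make an inference about a new input.

```python
# Minimal sketch of "training": an initial algorithm (here, ordinary
# least squares) processes a training dataset and yields a function
# (the model), which is then used for inference on unseen inputs.

def train(dataset):
    """Fit y = w*x + b by least squares over a list of (x, y) pairs."""
    n = len(dataset)
    mean_x = sum(x for x, _ in dataset) / n
    mean_y = sum(y for _, y in dataset) / n
    var_x = sum((x - mean_x) ** 2 for x, _ in dataset)
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in dataset)
    w = cov_xy / var_x
    b = mean_y - w * mean_x
    # The learned function ("model") is returned as a closure.
    return lambda x: w * x + b

# Training dataset: its underlying pattern is y = 2x + 1.
training_data = [(0, 1), (1, 3), (2, 5), (3, 7)]
model = train(training_data)

# Inference on a new input the model has never seen.
print(model(10))  # -> 21.0
```

      Note that, exactly as the text observes, the model captures a correlation in the data; nothing in the fitted function explains *why* the inputs and outputs are linked.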

      It would be naive and dangerous to believe that algorithms make decisions (inferences) in an objective, bias-free way. (31) So long as any aspect of the patterns and correlations that algorithms draw from the big data they assess depends in any manner on human interpretation, (32) they cannot be bias-free. Because existing bias is often the result of long histories of structural injustice, it is difficult to extricate it from the training datasets fed into machine-learning algorithms, especially since doing so might reduce the accuracy of the inferences an algorithm makes. (33) The bias is of course not always wrong or unlawful; it becomes unlawful, however, when it reaches a point that the government has prohibited under anti-discrimination law. (34)

      Algorithmic discrimination can arise out of one or a combination of the following: modelling, training, and usage. In modelling, consider that large and complex datasets usually present more than one fitting function to an algorithm. To help the algorithm select an exact function, human beings supplement the information provided by the dataset with a series of assumptions about the characteristics of the best function. This is what is known as an inductive bias. (35) For instance, a screening algorithm for employees will be designed in line with human classification of...
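      The notion of inductive bias described above can be made concrete with a small, hypothetical sketch (the data and the two model classes are invented for illustration): the same training dataset fits more than one function perfectly, so the human choice of hypothesis class, rather than anything in the data, determines what the model predicts for inputs it has never seen.

```python
# Two different inductive biases applied to the same training data.
# Both models fit the data perfectly, yet disagree on unseen inputs.

training_data = {1: 2, 2: 4, 3: 6}  # consistent with y = 2x

def linear_model(x):
    """Bias A: assume the best function is a line through the origin."""
    slope = sum(yi / xi for xi, yi in training_data.items()) / len(training_data)
    return slope * x

def lookup_model(x):
    """Bias B: assume the best function memorizes the data, else 0."""
    return training_data.get(x, 0)

# Both functions match every training example...
assert all(linear_model(x) == y for x, y in training_data.items())
assert all(lookup_model(x) == y for x, y in training_data.items())

# ...but diverge sharply on a new input. The divergence reflects the
# human-supplied assumptions, not information in the dataset.
print(linear_model(10), lookup_model(10))  # prints: 20.0 0
```

      The human-supplied assumption about the "shape" of the best function is precisely the inductive bias the text describes, and it is one channel through which designers' judgments enter an otherwise automated process.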
