The Scored Society: Due Process for Automated Predictions
Publication year: 2021
Procedural regularity is essential for those stigmatized by "artificially intelligent" scoring systems. The American due process tradition should inform basic safeguards. Regulators should be able to test scoring systems to ensure their fairness and accuracy. Individuals should be granted meaningful opportunities to challenge adverse decisions based on scores miscategorizing them. Without such protections in place, systems could launder biased and arbitrary data into powerfully stigmatizing scores.
INTRODUCTION TO THE SCORED SOCIETY
I. CASE STUDY OF FINANCIAL RISK SCORING
   A. A (Very) Brief History of Credit Scoring Systems
   B. The Problems of Credit Scoring
      1. Opacity
      2. Arbitrary Assessments
      3. Disparate Impact
   C. The Failure of the Current Regulatory Model
II. PROCEDURAL SAFEGUARDS FOR AUTOMATED SCORING SYSTEMS
   A. Regulatory Oversight over Scoring Systems
      1. Transparency to Facilitate Testing
      2. Risk Assessment Reports and Recommendations
   B. Protections for Individuals
      1. Notice Guaranteed by Audit Trails
      2. Interactive Modeling
   C. Objections
CONCLUSION
INTRODUCTION TO THE SCORED SOCIETY
Eggers's imagination is not far from current practices. Although predictive algorithms may not yet be ranking high school students nationwide, or tagging criminals' associates with color-coded risk assessments, they are increasingly rating people in countless aspects of their lives.
Consider these examples. Job candidates are ranked by what their online activities say about their creativity and leadership.(fn5) Software engineers are assessed for their contributions to open source projects, with points awarded when others use their code.(fn6) Individuals are assessed as likely to vote for a candidate based on their cable-usage patterns.(fn7) Recently released prisoners are scored on their likelihood of recidivism.(fn8)
How are these scores developed? Predictive algorithms mine personal information to make guesses about individuals' likely actions and risks.(fn9) A person's on- and offline activities are turned into scores that rate them above or below others.(fn10) Private and public entities rely on predictive algorithmic assessments to make important decisions about individuals.(fn11)
Sometimes, individuals can score the scorers, so to speak. Landlords can report bad tenants to data brokers, while tenants can check abusive landlords on sites like ApartmentRatings.com. On sites like Rate My Professors, students can score professors, who can respond to critiques via video. In many online communities, commenters can in turn rank the interplay between the rated, the raters, and the raters of the rated, in an effort to make sense of it all (or at least to award the most convincing or popular with points or "karma").(fn12)
Although mutual-scoring opportunities among formally equal subjects exist in some communities, the realm of management and business more often features powerful entities that turn individuals into ranked and rated objects.
And there is far more to come. Algorithmic predictions about health risks, based on information that individuals share with mobile apps about their caloric intake, may soon result in higher insurance premiums.(fn18) Sites soliciting feedback on "bad drivers" may aggregate the information and share it with insurance companies that score the risk potential of insured individuals.(fn19)
The scoring trend is often touted as good news. Advocates applaud the removal of human beings and their flaws from the assessment process. Automated systems, the argument goes, rate all individuals in the same way, thus averting discrimination. But this account is misleading. Because human beings program predictive algorithms, their biases and values are embedded in the software's instructions, known as source code, and in its predictive models.(fn20) Scoring systems also mine datasets containing inaccurate and biased information supplied by people.(fn21) There is nothing inherently unbiased about scoring systems.
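The point that biased inputs yield biased scores can be made concrete with a toy sketch. The example below is purely illustrative (the zip codes, approval counts, and the rate-based scoring rule are all hypothetical, not any actual lender's method): a scorer "learned" from historically skewed approval decisions reproduces the skew, even though the automated rule treats every applicant identically.

```python
# Illustrative sketch (hypothetical data): a scorer trained on biased
# historical decisions inherits the bias, despite applying one uniform rule.
from collections import defaultdict

def learn_scores(history):
    """Score each zip code by its historical approval rate."""
    totals = defaultdict(lambda: [0, 0])  # zip -> [approvals, applications]
    for zip_code, approved in history:
        totals[zip_code][0] += approved
        totals[zip_code][1] += 1
    return {z: a / n for z, (a, n) in totals.items()}

# Hypothetical past decisions: equally qualified applicant pools, but loan
# officers historically approved zip 10001 far more often than zip 60601.
history = [("10001", 1)] * 90 + [("10001", 0)] * 10 \
        + [("60601", 1)] * 40 + [("60601", 0)] * 60

scores = learn_scores(history)
# The "neutral" automated score now encodes the historical disparity:
# applicants from 60601 start lower through no fault of their own.
```

The rule itself contains no reference to any protected trait; the disparity enters entirely through the training data, which is the Article's point.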
Supporters of scoring systems insist that we can trust algorithms to adjust themselves for greater accuracy. In the case of credit scoring, lenders combine the traditional three-digit credit scores with "credit analytics," which track consumers' transactions. Suppose credit-analytics systems predict that efforts to save money correlate with financial distress. Buying generic products instead of branded ones could then result in a hike in interest rates. But, the story goes, if consumers who bought generic brands also purchased items suggesting financial strength, then all of their purchases would factor into their score, keeping them from being penalized for any particular purchase.
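The "totality of purchases" claim can be sketched in a few lines. The signal names and weights below are invented for illustration; no actual credit-analytics vendor discloses such parameters, which is precisely the opacity problem the next paragraph raises.

```python
# Hypothetical "credit analytics" sketch: each transaction type nudges a
# score up or down, and the final score reflects the totality of purchases.
SIGNALS = {                    # assumed weights, purely illustrative
    "generic_groceries": -5,   # treated as a distress signal
    "luxury_watch": +8,        # treated as a strength signal
    "on_time_utility": +4,
}

def analytics_score(base, transactions):
    return base + sum(SIGNALS.get(t, 0) for t in transactions)

alone = analytics_score(700, ["generic_groceries"])
in_totality = analytics_score(700, ["generic_groceries", "luxury_watch",
                                    "on_time_utility"])
# Alone, the generic purchase lowers the score (695); seen alongside other
# purchases, the net effect is positive (707).
```

Whether real systems behave this way cannot be verified from the outside, since weights like these are guarded as trade secrets.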
Does everything come out in the wash because information is seen in its totality? We cannot rigorously test this claim because scoring systems are shrouded in secrecy. Although some scores, such as credit scores, are available to the public, the scorers refuse to reveal the method and logic of their predictive systems.(fn22) No one can challenge the process of scoring or its results because the algorithms are zealously guarded trade secrets.(fn23) As this Article explores, the outputs of credit-scoring systems undermine supporters' claims. Credit scores are plagued by arbitrary results. They may also have a disparate impact on historically subordinated groups.
Just as concerns about scoring systems grow more acute, their human element is diminishing. Although software engineers initially identify the correlations and inferences programmed into algorithms, Big Data promises to eliminate the human "middleman" at some point in the process.(fn24) Once data-mining programs have identified a range of correlations and inferences, they build on them to generate new forms of learning. The results of prior rounds of data mining can lead to unexpected correlations in click-through activity. If, for instance, predictive algorithms determine not only the types of behavior suggesting loan repayment, but also automate the process of learning which adjustments worked best in the past, the computing process reaches a third level of sophistication: determining
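The self-adjusting loop described above can be sketched schematically. Everything here is hypothetical (the signal, the outcomes, the learning rate): the point is only that after a few automated rounds, the rule in force is one no human wrote or reviewed.

```python
# Sketch of an automated adjustment loop (all numbers hypothetical):
# the system scores loans with a weighted signal, then nudges the weight
# toward whatever predicted repayment last round, with no human review.
def adjust_weight(weight, outcomes, lr=0.1):
    """Update a signal's weight from (signal_value, repaid) outcome pairs."""
    for signal_value, repaid in outcomes:
        predicted = signal_value * weight > 0.5
        error = (1 if repaid else 0) - (1 if predicted else 0)
        weight += lr * error * signal_value
    return weight

w = 0.4
for _ in range(3):  # successive automated rounds, no human in the loop
    w = adjust_weight(w, [(1.0, True), (1.0, True), (0.2, False)])
# The system has rewritten its own rule; nobody can say which human
# judgment, if any, the final weight reflects.
```

This is the "middleman" elimination in miniature: each round's rule is produced by the previous round's outputs rather than by an engineer's decision.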