THE SCORED SOCIETY: DUE PROCESS FOR AUTOMATED PREDICTIONS

Danielle Keats Citron(fn*) & Frank Pasquale(fn**)

Abstract: Big Data is increasingly mined to rank and rate individuals. Predictive algorithms assess whether we are good credit risks, desirable employees, reliable tenants, valuable customers - or deadbeats, shirkers, menaces, and "wastes of time." Crucial opportunities are on the line, including the ability to obtain loans, work, housing, and insurance. Though automated scoring is pervasive and consequential, it is also opaque and lacking oversight. In one area where regulation does prevail - credit - the law focuses on credit history, not the derivation of scores from data.

Procedural regularity is essential for those stigmatized by "artificially intelligent" scoring systems. The American due process tradition should inform basic safeguards. Regulators should be able to test scoring systems to ensure their fairness and accuracy. Individuals should be granted meaningful opportunities to challenge adverse decisions based on scores miscategorizing them. Without such protections in place, systems could launder biased and arbitrary data into powerfully stigmatizing scores.

INTRODUCTION TO THE SCORED SOCIETY

I. CASE STUDY OF FINANCIAL RISK SCORING

A. A (Very) Brief History of Credit Scoring Systems

B. The Problems of Credit Scoring

1. Opacity

2. Arbitrary Assessments

3. Disparate Impact

C. The Failure of the Current Regulatory Model

II. PROCEDURAL SAFEGUARDS FOR AUTOMATED SCORING SYSTEMS

A. Regulatory Oversight over Scoring Systems

1. Transparency to Facilitate Testing

2. Risk Assessment Reports and Recommendations

B. Protections for Individuals

1. Notice Guaranteed by Audit Trails

2. Interactive Modeling

C. Objections

CONCLUSION

[Jennifer is] ranked 1,396 out of 179,827 high school students in Iowa. . . . Jennifer's score is the result of comparing her test results, her class rank, her school's relative academic strength, and a number of other factors. . . .

[C]an this be compared against all the other students in the country, and maybe even the world? . . .

That's the idea . . . .

That sounds very helpful. . . . And would eliminate a lot of doubt and stress out there.

-Dave Eggers, The Circle(fn1)

INTRODUCTION TO THE SCORED SOCIETY

In his novel The Circle, Dave Eggers imagines persistent surveillance technologies that score people in every imaginable way. Employees receive rankings for their participation in social media.(fn2) Retinal apps allow police officers to see career criminals in distinct colors: yellow for low-level offenders, orange for slightly more dangerous but still nonviolent offenders, and red for the truly violent.(fn3) Intelligence agencies can create a web of all of a suspect's contacts so that criminals' associates are tagged in the same color scheme as the criminals themselves.(fn4)

Eggers's imagination is not far from current practices. Although predictive algorithms may not yet be ranking high school students nationwide, or tagging criminals' associates with color-coded risk assessments, they are increasingly rating people in countless aspects of their lives.

Consider these examples. Job candidates are ranked by what their online activities say about their creativity and leadership.(fn5) Software engineers are assessed for their contributions to open source projects, with points awarded when others use their code.(fn6) Individuals are assessed as likely to vote for a candidate based on their cable-usage patterns.(fn7) Recently released prisoners are scored on their likelihood of recidivism.(fn8)

How are these scores developed? Predictive algorithms mine personal information to make guesses about individuals' likely actions and risks.(fn9) A person's on- and offline activities are turned into scores that rate them above or below others.(fn10) Private and public entities rely on predictive algorithmic assessments to make important decisions about individuals.(fn11)
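How such a ranking might work can be illustrated with a minimal sketch. The code below is purely hypothetical: the attributes, weights, and population are invented for illustration and do not describe any actual scoring product. It simply shows how observed behaviors can be collapsed into a single number and a percentile rank against other people.

```python
# Hypothetical sketch: turning personal data into a score and a rank.
# The attribute names and weights are invented for illustration only;
# they do not reflect any real scoring system.

from dataclasses import dataclass


@dataclass
class Person:
    on_time_payments: int    # count of on-time bill payments
    late_payments: int       # count of late payments
    social_media_posts: int  # volume of online activity


def score(person: Person) -> float:
    """Weighted sum of observed behaviors; the weights are arbitrary."""
    return (
        2.0 * person.on_time_payments
        - 5.0 * person.late_payments
        + 0.1 * person.social_media_posts
    )


def percentile_rank(target: Person, population: list[Person]) -> float:
    """Share of the population scoring at or below the target person."""
    target_score = score(target)
    at_or_below = sum(1 for p in population if score(p) <= target_score)
    return 100.0 * at_or_below / len(population)


if __name__ == "__main__":
    population = [
        Person(on_time_payments=24, late_payments=0, social_media_posts=300),
        Person(on_time_payments=12, late_payments=3, social_media_posts=50),
        Person(on_time_payments=6, late_payments=6, social_media_posts=10),
    ]
    applicant = Person(on_time_payments=18, late_payments=1, social_media_posts=120)
    print(f"score: {score(applicant):.1f}")
    print(f"percentile rank: {percentile_rank(applicant, population):.0f}%")
```

However simple, the mechanics are the same as in commercial systems: whoever chooses the attributes and weights decides who ranks above whom.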

Sometimes, individuals can score the scorers, so to speak. Landlords can report bad tenants to data brokers while tenants can check abusive landlords on sites like ApartmentRatings.com. On sites like Rate My Professors, students can score professors who can respond to critiques via video. In many online communities, commenters can in turn rank the interplay between the rated, the raters, and the raters of the rated, in an effort to make sense of it all (or at least award the most convincing or popular with points or "karma").(fn12)

Although mutual-scoring opportunities among formally equal subjects exist in some communities, the realm of management and business more often features powerful entities who turn individuals into ranked and rated objects.(fn13) While scorers often characterize their work as an oasis of opportunity for the hardworking, the following are examples of ranking systems that are used to individuals' detriment. A credit card company uses behavioral-scoring algorithms to rate consumers' credit risk because they used their cards to pay for marriage counseling, therapy, or tire-repair services.(fn14) Automated systems rank candidates' talents by looking at how others rate their online contributions.(fn15) Threat assessments result in arrests or the inability to fly even though they are based on erroneous information.(fn16) Political activists are designated as "likely" to commit crimes.(fn17)

And there is far more to come. Algorithmic predictions about health risks, based on information that individuals share with mobile apps about their caloric intake, may soon result in higher insurance premiums.(fn18) Sites soliciting feedback on "bad drivers" may aggregate the information, and could possibly share it with insurance companies who score the risk potential of insured individuals.(fn19)

The scoring trend is often touted as good news. Advocates applaud the removal of human beings and their flaws from the assessment process. Automated systems are claimed to rate all individuals in the same way, thus averting discrimination. But this account is misleading. Because human beings program predictive algorithms, their biases and values are embedded into the software's instructions, the source code that implements the predictive algorithms.(fn20) Scoring systems mine datasets containing inaccurate and biased information provided by people.(fn21) There is nothing unbiased about scoring systems.

Supporters of scoring systems insist that we can trust algorithms to adjust themselves for greater accuracy. In the case of credit scoring, lenders combine the traditional three-digit credit scores with "credit analytics," which track consumers' transactions. Suppose credit-analytics systems predict that efforts to save money correlate with financial distress. Buying generic products instead of branded ones could then result in a hike in interest rates. But, the story goes, if consumers who bought generic brands also purchased items suggesting their financial strength, then all of their purchases would factor into their score, keeping them from being penalized for any particular purchase.
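To make the offsetting claim concrete, here is a minimal, hypothetical sketch of a behavioral-scoring adjustment. The transaction categories, weights, and base score are invented for illustration; lenders do not disclose the actual variables or coefficients behind credit analytics.

```python
# Hypothetical behavioral-scoring sketch. Transaction categories and
# weights are invented to illustrate the "it all washes out in the
# totality" claim; real credit-analytics models are proprietary.

TRANSACTION_WEIGHTS = {
    "generic_grocery_purchase": -2.0,  # treated as a distress signal
    "brand_name_purchase": +1.0,
    "savings_deposit": +3.0,           # treated as a strength signal
    "paid_statement_in_full": +5.0,
}


def behavioral_score(transactions: list[str], base_score: float = 700.0) -> float:
    """Adjust a base score by summing weights over all observed transactions."""
    adjustment = sum(TRANSACTION_WEIGHTS.get(t, 0.0) for t in transactions)
    return base_score + adjustment


# A consumer who buys generic goods but also saves and pays in full:
thrifty = ["generic_grocery_purchase"] * 4 + ["savings_deposit", "paid_statement_in_full"]
# A consumer with the same generic purchases and no offsetting signals:
flagged = ["generic_grocery_purchase"] * 4

print(behavioral_score(thrifty))  # 700 - 8 + 3 + 5 = 700.0
print(behavioral_score(flagged))  # 700 - 8         = 692.0
```

On these invented weights the offsetting purchases cancel out exactly, but nothing guarantees that a real model weighs them that way.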

Does everything work out in a wash because information is seen in its totality? We cannot rigorously test this claim because scoring systems are shrouded in secrecy. Although some scores, such as credit, are available to the public, the scorers refuse to reveal the method and logic of their predictive systems.(fn22) No one can challenge the process of scoring and the results because the algorithms are zealously guarded trade secrets.(fn23) As this Article explores, the outputs of credit-scoring systems undermine supporters' claims. Credit scores are plagued by arbitrary results. They may also have a disparate impact on historically subordinated groups.

Just as concerns about scoring systems are becoming more acute, their human element is diminishing. Although software engineers initially identify the correlations and inferences programmed into algorithms, Big Data promises to eliminate the human "middleman" at some point in the process.(fn24) Once data-mining programs have a range of correlations and inferences, they use them to project new forms of learning. The results of prior rounds of data mining can lead to unexpected correlations in click-through activity. If, for instance, predictive algorithms not only determine the types of behavior suggesting loan repayment, but also automate the process of learning which adjustments worked best in the past, the computing process reaches a third level of sophistication: determining which metrics for measuring past...
