Author: Harvis-Nazzario, Leigh

    Many companies are exploring ways to integrate predictive hiring solutions into the application, interviewing, and hiring processes. While these systems can offer benefits such as efficiency, performance, and retention, they can also perpetuate systemic bias. Whether that bias is unconscious or not, we are complicit if we allow these hiring solutions to be developed without holding companies accountable. Bias that creeps into hiring algorithms outweighs the efficiency and productivity these solutions promise. While cost and efficiency are certainly an employer's concern, so is the ability to eliminate bias in the hiring process. Bias will arise when humans alone are responsible for developing hiring solutions, and if the training data is biased or not fully representative, the predictive models will produce inequitable results.

    This note will discuss the background and evolution of bias in the context of hiring technologies. It will also survey current laws and regulations governing anti-discriminatory hiring practices, examine how policymakers are addressing algorithmic discrimination, and conclude with the future of automated hiring.


    When looking at historical data and trends, algorithms can predict what someone will buy, whom they will date, what TV shows they will watch, and whom they should hire. Algorithmic technology is gaining popularity across all sectors. Zion Market Research published a report estimating that the predictive analytics market will be worth $10.95 billion by 2022, a 21% increase over the last five years. (1) This rapid growth is due to the vast amounts of data available within various sectors and the ability of organizations to use predictive analytics to assist them in the decision-making process. (2)


      Algorithms are a set of instructions that a computer uses to solve a problem. (3) You can think of an algorithm like a cooking recipe. Like a recipe, there is a list of steps that helps you prepare that duck à l'orange you watched on last night's episode of Top Chef. Algorithms are simply if-then statements. (4) If you add these ingredients in this order, then a Michelin Star meal will be created. You can also think of algorithms as a search query; the answer that is returned depends on the question. The input impacts the output, and the output changes how you respond. For example, if you ask a friend where she was born, your response might be different if she says she was born in your hometown versus Paris, France. (5) In the context of computer technology, a machine will start to learn your behavior (machine learning), so depending on the pages you follow, the ads you click, and the searches you perform, the algorithms will make recommendations based on your preferences. (6) The input impacts the output.
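The if-then structure described above can be sketched in a few lines of code. This is a minimal illustration only; the page categories and ad labels are invented, not drawn from any real recommendation system.

```python
# A minimal sketch of the "input impacts the output" idea: an algorithm
# is a fixed series of if-then steps applied to its input.
# All categories and labels here are hypothetical.

def recommend_ads(pages_followed):
    """Return ad topics based on the pages a user follows."""
    recommendations = []
    if "cooking" in pages_followed:      # if the user follows cooking pages...
        recommendations.append("kitchenware ads")  # ...then show kitchenware
    if "travel" in pages_followed:
        recommendations.append("airline ads")
    if not recommendations:              # no signal: fall back to a default
        recommendations.append("generic ads")
    return recommendations

print(recommend_ads({"cooking"}))  # ['kitchenware ads']
print(recommend_ads(set()))        # ['generic ads']
```

Changing the input (the pages followed) changes the output (the ads recommended), which is the whole point of the recipe analogy: the steps are fixed, but what goes in determines what comes out.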

      However, computers must be trained to make predictions, suggestions, and recommendations. (7) Once the computer has received a significant amount of data, it will make predictions about the patterns it sees in the data. (8) For example, if you want to train a computer to recognize a book, you feed the computer data indicating which features make an object a book and which features exclude it from being one. (9) The hope is that after much testing and tweaking, the system will be able to identify when an object is a book. (10)
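What "training on labeled examples" might look like can be sketched with a toy classifier. Everything here is illustrative: the hand-picked features, the labels, and the nearest-example rule are assumptions for the sake of the book example, not any vendor's actual method.

```python
# A toy sketch of training a classifier to recognize a "book."
# Each object is described by three hypothetical features:
# (has_pages, has_cover, is_rectangular), each 0 or 1.

def train(examples):
    """'Training' here simply memorizes the labeled feature vectors."""
    return examples

def predict(model, features):
    """Label a new object by its nearest labeled example (1-nearest-neighbor)."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(model, key=lambda ex: distance(ex[0], features))
    return best[1]

# Labeled training data: feature vector -> "book" or "not book"
training_data = [
    ((1, 1, 1), "book"),      # has pages, a cover, and is rectangular
    ((1, 1, 1), "book"),
    ((0, 0, 1), "not book"),  # a rectangular box: no pages, no cover
    ((0, 1, 0), "not book"),  # a round tin with a lid
]

model = train(training_data)
print(predict(model, (1, 1, 1)))  # book
print(predict(model, (0, 0, 0)))  # not book
```

The sketch also previews the note's central concern: the system can only reflect the examples it was fed. If the labeled data is skewed or incomplete, the predictions will be too.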

      Researchers at the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory (CSAIL), developed an algorithm that learned how to predict the future. (11) While this algorithm cannot predict the winning lottery numbers, it can predict how people will greet each other. (12) The algorithm looked at over 600 hours of footage from shows like "The Office" and "Desperate Housewives." When the algorithm was shown new footage that it had not seen before, it could accurately predict how people would greet each other forty-three percent of the time. (13) In contrast, humans could predict a greeting seventy-one percent of the time. (14) Carl Vondrick, a CSAIL Ph.D. student on the project, stated, "[h]umans automatically learn to anticipate actions through experience, which is what made us interested in trying to imbue computers with the same sort of common sense." (15)

      Industry leaders are unsure whether machine learning can replace people. (16) Data scientists must verify whether the predictions or correlations make sense because oftentimes the reasoning and calculations do not align. (17) For example, a financial services company discarded a finding that correlated an individual's height to their ability to repay a loan. (18) "Algorithms can make systems smarter, but without adding a little common sense into the equation they can still produce some pretty bizarre results." (19)


      Hiring the right people in an organization is critical to a company's success. Not only will productivity increase, but workplace culture will improve. (20) Hiring the wrong employees, on the other hand, can prove to be very costly for the organization and produce harmful effects in the workplace. (21) According to the Department of Labor, the cost of a bad hire can be at least thirty percent of an employee's first-year earnings. (22) Onboarding alone can cost an employer $240,000 for each employee. (23) However, hiring a bad employee can cost as much as $840,000. (24)

      While cost and efficiency are certainly an employer's concern, so is the ability to eliminate bias in the hiring process. (25) In the article, Hiring Fairly in the Age of Algorithms, the authors provide examples of bias that arise when humans alone are responsible for hiring. (26) One example of bias is the "bandwagon effect," which occurs when one negative opinion sways the group, ultimately harming an otherwise qualified candidate. (27) Bias can also arise when a singular belief about a group is applied to every member of that group. (28) In the 1990s, 476 hiring audits were conducted in Washington, D.C. and Chicago, and they uncovered bias that unfairly impacted Black job seekers. (29) In one of every five audits, a White candidate proceeded further in the interview process than a Black candidate of equal qualifications, and only five percent of Black candidates received a job offer when a White candidate of equal qualifications did not. The authors found that "humans systematically make decisions that result in seemingly unfair outcomes for members of different groups." (30) Therefore, for the last twenty years, eliminating human bias within the hiring process has been a goal of companies using hiring algorithms. (31) This is why major organizations, from Target to PepsiCo, use predictive algorithms in the hiring process. (32) These algorithmic solutions operate in the earliest stages of the hiring process, including the personalization of job advertisements, candidate selection, and virtual interviews. (33)

      The evolution of algorithms used for hiring began in the 1990s with the launch of the first job board, search engine, and pay-per-click advertisements. (34) Applying for jobs became more accessible and automated, leading employers to adopt applicant tracking systems to assist with tracking and organization. (35) From there came the ability for recruiters to source candidates from platforms such as LinkedIn. (36) Tools like LinkedIn expanded a recruiter's search from only active candidates who are readily applying to jobs to more passive candidates who are not looking for new opportunities. (37) As the volume of job applicants increased, employers turned to new and more advanced data solutions to help manage the increased flow of candidates. (38)

      Today, most hiring solutions have predictive algorithms built into their platforms, and employers are adopting these hiring solutions to improve recruiting metrics like time and cost to hire, quality of hire, and diversity metrics. (39) Predictive hiring solutions aim to improve these metrics, but there may be a cost: bias may be built into the algorithms. (40)


      Vendors developing predictive hiring solutions claim that these solutions will reduce bias and empower hiring teams to make fairer decisions in the hiring process. (41) However, organizations that adopt these solutions should proceed with caution. Predictive hiring systems can be biased depending on who builds the algorithms and how they are used. (42) Algorithms are developed in a "corporate black box," making it difficult for an end-user to know how their data is managed. (43) Unfortunately, some algorithms produce biased results, and because of the lack of regulation, transparency, auditing, and control, that bias becomes challenging to prevent and correct.

      Asking a computer to identify a "book" is a simple question. However, what happens when the computer is asked to perform more complicated tasks with more serious consequences? A developer creating an algorithm needs to be sure that the data is thoroughly checked, balanced, and appropriately selected, which is often not the case, leading to algorithmic bias. (44)
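One concrete form of the "checking and balancing" described above is an audit of whether each group appears in the training data in roughly the same proportion as in the applicant pool. The following is a minimal sketch of such a check; the group names, counts, and the twenty-percent tolerance are invented for illustration.

```python
# A hypothetical pre-training audit: flag groups whose share of the
# training data drifts from their share of the applicant population.
# Group names, counts, and the tolerance are illustrative assumptions.

from collections import Counter

def representation_gaps(training_labels, population_shares, tolerance=0.20):
    """Return {group: (expected_share, observed_share)} for drifting groups."""
    counts = Counter(training_labels)
    total = sum(counts.values())
    gaps = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        # Flag the group if its observed share drifts more than
        # `tolerance` (as a fraction of the expected share).
        if abs(observed - expected) > tolerance * expected:
            gaps[group] = (expected, round(observed, 2))
    return gaps

# The applicant pool is 50/50, but the "top performer" sample is 80/20.
training_labels = ["group_a"] * 80 + ["group_b"] * 20
print(representation_gaps(training_labels, {"group_a": 0.5, "group_b": 0.5}))
# {'group_a': (0.5, 0.8), 'group_b': (0.5, 0.2)}
```

An audit like this catches only one narrow failure mode, under-representation in the sample; it says nothing about bias already baked into the labels themselves, which is the harder problem the next paragraphs describe.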

      Algorithmic bias exists because of both cognitive bias and incomplete data that fails to represent an entire population. (45) These two areas are interconnected. If the training data is biased, not fully representative, or inaccurate, the predictive models will be flawed, producing inequitable results. (46) For example, an employer might select a group of top performers as the basis for the training data because it wants to hire similar employees. However, if the company originally used biased data to select those top performers, then it is applying the same biased data to a new group of hires. Hiring teams might also give more weight to candidates who are rated higher by an algorithm. This phenomenon is known as...
