Big Data, Bigger Risk: Recognizing and Managing the Perils of Using Algorithms in Recruiting and Hiring

Jurisdiction: United States, Federal
Publication year: 2019
Citation: Vol. 2 No. 4

Mark J. Girouard*

Machine learning algorithms have the potential to significantly streamline the recruiting and hiring process, from identifying qualified passive candidates to efficiently winnowing down the increasingly large volume of applications that employers now regularly receive. The author of this article discusses the use of algorithms in sourcing, recruiting, and selecting talent and offers recommendations for employers.

Increasingly, employers are turning to machine learning algorithms or other "big data" solutions to source, recruit, screen, select, and manage talent. These tools have the potential to significantly streamline the recruiting and hiring process, from identifying qualified passive candidates to efficiently winnowing down the increasingly large volume of applications that employers now regularly receive. While these tools can reduce the time and costs associated with finding and selecting talent, they also have the potential to create legal risk for employers.

Background

These developments have not gone unnoticed by regulators or private litigants. For example, in 2016 the federal Equal Employment Opportunity Commission ("EEOC") convened a public meeting to explore the legal implications of resume-scraping tools, machine learning algorithms, and other big data solutions for federal anti-discrimination law.1 Since that meeting, however, the EEOC has yet to articulate clear guidelines to direct the use of these developing technologies in practice. In addition, challenges to the use of algorithms in sourcing, recruiting, and selecting talent have begun to work their way through the courts. But there have not yet been any landmark legal decisions establishing precedent to guide employers as they begin to operate in this space. In light of this uncertainty, employers should proceed cautiously, including by considering the recommendations outlined below.

Broadly speaking, machine learning refers to any set of analytical procedures that identify patterns in data to provide insight and understanding. In the employment context, this broad concept encompasses everything from basic resume-scraping tools to complex systems that analyze audio, facial images, and verbal and non-verbal responses to video interviews. As these approaches have grown more sophisticated, they have also become more difficult for end users to understand. Indeed, the term "black box" is often used to describe the complex and seemingly opaque processes by which a trained algorithm makes recommendations about employment decisions. Such opacity leaves these tools vulnerable to perceptions of unfairness and a lack of transparency, increasing the risk of legal challenge. It also means that, if challenged, employers may struggle to defend their tools if they cannot identify the features used to select from among applicants.

Litigation

In the context of sourcing talent, a little over a year ago, three employers—T-Mobile, Amazon, and Cox Communications—were sued for allegedly discriminating on the basis of age in the way they source potential applicants via Facebook.2 The complaint targets not only those three employers but also an alleged defendant class comprising hundreds of major American employers who have used age restrictions when advertising employment opportunities on Facebook.3 Primarily, the allegations focus on employers who intentionally chose to direct ads to Facebook users within a specific age range (e.g., users aged 18 to 45). But the complaint also sweeps in Facebook's algorithms that may restrict which users see an employer's recruitment ads based on characteristics that are correlated with age. For example, the complaint notes that Facebook's "look-a-like" tool allows employers to upload information about their current workforce, which the tool then uses to identify a target audience of users with similar characteristics.4 The plaintiffs allege that this tool can result in a pool of potential applicants that replicates age-based or other demographic disparities that already exist in the employer's workforce.5 Moreover, the plaintiffs assert that Facebook is acting as the employers' agent when it applies this and other algorithmic tools, making the employers responsible for any disparities in users' access to information about employment opportunities that may flow from these tools.6
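The mechanism the plaintiffs describe can be illustrated with a toy simulation. The sketch below is not Facebook's actual algorithm; it simply shows how selecting an audience that resembles a seed workforce on a feature correlated with age (here, a fabricated "years of experience" attribute) can reproduce the seed group's age skew in the resulting audience. All names, data, and thresholds are invented for illustration.

```python
# Illustrative only: a "look-a-like" audience matched on a feature that
# correlates with age can replicate the age skew of the seed workforce.
# All data here is fabricated.
import random

random.seed(0)

def make_person(age):
    # Hypothetical feature: experience roughly tracks age, with noise.
    return {"age": age, "years_experience": max(0, age - 22 + random.randint(-2, 2))}

# Seed workforce skews young (ages 23-35).
workforce = [make_person(random.randint(23, 35)) for _ in range(200)]

# Broader candidate pool spans ages 20-64.
pool = [make_person(random.randint(20, 64)) for _ in range(2000)]

# Crude "look-a-like" rule: keep pool members whose experience falls
# inside the range observed in the seed workforce.
lo = min(p["years_experience"] for p in workforce)
hi = max(p["years_experience"] for p in workforce)
audience = [p for p in pool if lo <= p["years_experience"] <= hi]

def avg_age(people):
    return sum(p["age"] for p in people) / len(people)

print(f"avg age: workforce {avg_age(workforce):.1f}, "
      f"pool {avg_age(pool):.1f}, audience {avg_age(audience):.1f}")
```

Because the matching feature is correlated with age, the selected audience ends up younger on average than the unrestricted pool, even though age itself is never used as a criterion.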

The parties to the lawsuit have spent the last year in a pitched battle over whether the complaint sufficiently states viable claims, and the court heard arguments on pending motions to dismiss in April of this year. Until the viability of the plaintiffs' claims—including their claims about algorithmic tools—is more settled, employers are well advised to take a close look at their social media recruiting practices. Even if they are not actively limiting recruitment ads on the basis of age or other protected characteristics, they should consider whether existing disparities in representation of protected groups in their workforce are significant enough that use of look-a-like functions and similar tools might replicate those differences. Likewise, they should consider whether other characteristics built into their profile of an ideal candidate could be viewed as a proxy for age (e.g., graduation year, or maximum years of work experience) or membership in another demographic group, making an algorithm that capitalizes on those characteristics a source of heightened legal risk.
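One common way to gauge whether a disparity is "significant enough" is the four-fifths rule from the EEOC's Uniform Guidelines on Employee Selection Procedures: compare each group's selection rate to that of the most-selected group, and treat a ratio below 0.8 (four-fifths) as potential evidence of adverse impact. The sketch below applies that screen to fabricated applicant counts; the group labels and numbers are illustrative, not drawn from any case discussed above.

```python
# Minimal sketch of the four-fifths rule screen from the EEOC's Uniform
# Guidelines. Applicant and selection counts are fabricated.

def selection_rate(selected, applicants):
    return selected / applicants

def impact_ratio(rates):
    """Ratio of the lowest group selection rate to the highest."""
    return min(rates.values()) / max(rates.values())

# Hypothetical outcomes of an algorithmic screen, by age group.
rates = {
    "under_40": selection_rate(selected=120, applicants=400),  # 0.30
    "40_and_over": selection_rate(selected=30, applicants=150),  # 0.20
}

ratio = impact_ratio(rates)
print(f"impact ratio: {ratio:.2f} -> {'flag for review' if ratio < 0.8 else 'pass'}")
```

In this fabricated example the ratio is 0.20 / 0.30, well under the four-fifths threshold, so the screen would be flagged for closer review. The four-fifths rule is a rough screen rather than a legal safe harbor; statistical significance testing and practical significance also matter under the Guidelines.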

Uniform Guidelines on Employee...
