Using AI in hiring? Beware the legal threats.

Artificial intelligence sounds like it could be hiring's Holy Grail: A completely automated system that maximizes application-sorting efficiency, minimizes HR labor and reduces the chance that discrimination could taint the hiring process.

But despite its great potential, AI carries liability risks that HR pros must understand.

AI software relies on algorithms to sort data and analyze it quickly, without human input. Many HR departments now use AI programs to sift through resumes and applications to identify key words and phrases. Applications that meet an employer's screening criteria are forwarded to HR for further review. The rest are rejected.
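The screening step described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual product; the keyword list and threshold are invented for the example.

```python
# Hypothetical sketch of keyword-based application screening.
# The criteria below are illustrative only, not a real employer's list.
REQUIRED_KEYWORDS = {"python", "sql", "project management"}

def screen_application(resume_text: str, threshold: int = 2) -> bool:
    """Forward the application to HR if it matches enough keywords."""
    text = resume_text.lower()
    matches = sum(1 for kw in REQUIRED_KEYWORDS if kw in text)
    return matches >= threshold

# Applications scoring below the threshold are rejected automatically,
# with no human ever seeing them -- the efficiency the article describes.
```

Note that whoever chooses the keywords and the threshold is still making a judgment call; the algorithm only automates it.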

In addition to efficiency, software sellers tout AI's ability to bypass the subjective whims (and biases) that humans can introduce into the "who to interview" decision.

That's a dangerous assumption. While AI may guard against direct discriminatory decisions, it can still deliver results that have a disparate impact on protected groups of applicants.
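Disparate impact can be checked with simple arithmetic. One widely used benchmark is the EEOC's "four-fifths rule": if a protected group's selection rate falls below 80% of the highest group's rate, that is treated as evidence of adverse impact. The numbers below are invented for illustration.

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who pass the screen."""
    return selected / applicants

def four_fifths_check(rate_protected: float, rate_majority: float) -> bool:
    """True if the impact ratio meets the EEOC's 80% guideline."""
    return (rate_protected / rate_majority) >= 0.8

# Illustrative numbers: 30 of 100 majority-group applicants advance,
# but only 15 of 100 protected-group applicants do.
r_maj = selection_rate(30, 100)    # 0.30
r_prot = selection_rate(15, 100)   # 0.15
# Impact ratio 0.15 / 0.30 = 0.50, well under 0.80 --
# a red flag even though no one intended to discriminate.
```

An AI screen can fail this check even when race, sex, and age never appear in the data, because facially neutral criteria can correlate with protected status.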

For example, AI can create a closed loop that perpetuates latent bias. That is, if the job criteria are based on certain assumptions (for example, that many years of...
