EEOC teams with other agencies to monitor how artificial intelligence can perpetuate discrimination.

The EEOC has joined forces with the Consumer Financial Protection Bureau, the Federal Trade Commission and the Justice Department's Civil Rights Division to step up oversight of how artificial intelligence software might perpetuate unlawful discrimination.

A joint statement issued in April by the four agencies said that although AI can be a valuable business tool, its use "also has the potential to perpetuate unlawful bias, automate unlawful discrimination and produce other harmful outcomes."

The statement uses the broad term "automated systems" to mean "software and algorithmic processes, including AI, that are used to automate workflows and help people complete tasks or make decisions."

HR professionals are flocking to AI platforms such as ChatGPT, introduced in November 2022, to automate processes ranging from recruitment to performance management to benefits-plan design.

"While these tools can be useful," the document said, "they also have the potential to produce outcomes that result in unlawful discrimination." It said bias can arise because of problems involving:

Data and datasets: Automated system outcomes can be skewed by unrepresentative or imbalanced datasets, datasets that incorporate historical bias or datasets that contain other types of errors. AI systems can correlate data with protected classes, which can lead to discriminatory outcomes.
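One way employers commonly screen for this kind of skew is the "four-fifths rule" from the EEOC's Uniform Guidelines on Employee Selection Procedures: if one group's selection rate is less than 80 percent of the highest group's rate, the tool may be having an adverse impact. A minimal sketch in Python, using purely hypothetical applicant counts:

```python
# Minimal sketch of the EEOC "four-fifths" adverse-impact screen.
# The applicant and selection counts below are hypothetical, for illustration only.

def selection_rate(selected, applicants):
    """Fraction of a group's applicants that a screening tool selected."""
    return selected / applicants

def adverse_impact_ratio(rate_group, rate_highest):
    """Ratio of a group's selection rate to the highest group's rate.
    Under the four-fifths rule, a ratio below 0.8 suggests adverse impact."""
    return rate_group / rate_highest

rate_a = selection_rate(30, 60)   # group A: 30 of 60 selected -> 0.50
rate_b = selection_rate(12, 60)   # group B: 12 of 60 selected -> 0.20

ratio = adverse_impact_ratio(rate_b, rate_a)  # 0.20 / 0.50 = 0.40
print(f"Adverse-impact ratio: {ratio:.2f}")
print("Potential adverse impact" if ratio < 0.8 else "Within four-fifths threshold")
```

A ratio of 0.40 here falls well below the 0.8 threshold, the kind of disparity that can emerge when an automated system trained on skewed data screens candidates.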

"Opacity" and access: Many AI systems are "black boxes" whose...
