Artificial Stupidity: Computers aren't bigoted--they're just based on cold calculations, right?

by Luis Granados

The past two years have featured a steady drumbeat of problems with various artificial intelligence (AI) procedures, centered on a common theme: they produce the same kinds of racial, gender, ethnic, and other bias and discrimination that humans do when they're not careful. Or worse.

More and more courts are using an AI product called COMPAS to predict whether people convicted of crimes will offend again, and judges rely on it when handing down sentences. But a thorough third-party study of actual outcomes over time revealed that COMPAS was heavily infected with racial bias. Its false positives, cases where it predicted people would re-offend and they did not, skewed heavily toward black defendants; its false negatives, cases where people ended up back in court after the program predicted they wouldn't, skewed heavily toward white defendants.
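
To make the disparity concrete, here's a minimal sketch of the kind of audit a third party can run on a tool like this: compute false positive and false negative rates separately for each group. The Python below uses a handful of made-up records purely for illustration; none of it is actual COMPAS data.

```python
# Minimal sketch of a fairness audit: compare false positive and false
# negative rates across groups. The records below are made up for
# illustration; they are not actual COMPAS data.
from collections import defaultdict

# Each record: (group, predicted_to_reoffend, actually_reoffended)
records = [
    ("black", True,  False), ("black", True,  True),
    ("black", True,  False), ("black", False, True),
    ("white", False, True),  ("white", False, True),
    ("white", True,  True),  ("white", False, False),
]

counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
for group, predicted, actual in records:
    c = counts[group]
    if actual:
        c["pos"] += 1
        if not predicted:
            c["fn"] += 1   # predicted no re-offense, but one occurred
    else:
        c["neg"] += 1
        if predicted:
            c["fp"] += 1   # predicted a re-offense that never happened

for group, c in counts.items():
    fpr = c["fp"] / c["neg"] if c["neg"] else 0.0
    fnr = c["fn"] / c["pos"] if c["pos"] else 0.0
    print(f"{group}: false positive rate {fpr:.0%}, false negative rate {fnr:.0%}")
```

A tool can be "accurate" overall while making its mistakes in opposite directions for different groups, which is exactly what the audit of COMPAS found.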

Northpointe, the company that sells COMPAS, responded by saying that race wasn't even one of the variables its AI looked at. Maybe not--but names were. Addresses were. Schools were. Each of those can stand in as a proxy for race, and something caused COMPAS to err in ways that dramatically favored white people over black people.
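
Here is a minimal sketch of how a proxy leaks the very variable that was removed. Everything in it is synthetic: the zip codes, the groups, and the biased historical labels are invented for illustration.

```python
# Sketch: a model that never sees "race" still splits along group lines,
# because a proxy feature (here, a made-up zip code) carries the signal.
# All data below is synthetic and purely illustrative.
import random
from collections import Counter

random.seed(0)

rows = []
for _ in range(1000):
    group = random.choice("AB")
    # In this toy world, group A mostly lives in zip 1, group B in zip 0...
    if group == "A":
        zipcode = 1 if random.random() < 0.9 else 0
    else:
        zipcode = 0 if random.random() < 0.9 else 1
    # ...and the historical labels were biased against group A.
    label = random.random() < (0.6 if group == "A" else 0.3)
    rows.append((zipcode, group, label))

# "Train" the simplest possible model: the majority label per zip code.
by_zip = {0: Counter(), 1: Counter()}
for zipcode, _, label in rows:
    by_zip[zipcode][label] += 1
model = {z: by_zip[z].most_common(1)[0][0] for z in (0, 1)}

# The model was never told anyone's group, yet its predictions split by group.
for g in "AB":
    preds = [model[zipcode] for zipcode, group, _ in rows if group == g]
    print(f"group {g}: flagged high-risk {sum(preds) / len(preds):.0%} of the time")
```

Dropping the sensitive column is not the same as removing the sensitive information.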

Part of the underlying problem here is that Northpointe won't reveal how its algorithm works. That's not unusual--most AI firms have similar policies. A deeper problem, though, is that Northpointe itself may not even know how its algorithm works. Many of the most advanced AI procedures are essentially "black boxes" that make no effort to explain how they reach their conclusions, with a decision-making process that cannot be retraced.
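The difference between a traceable model and a black box shows up even in a toy example. The sketch below assumes scikit-learn and uses invented features and data; it simply contrasts a linear model, whose reasoning can be read off its weights, with a small neural network, whose knowledge is smeared across hundreds of numbers that map to no human-readable reason.

```python
# Sketch: a traceable model vs. a black box. Assumes scikit-learn is
# installed; the three features and the data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                    # three invented features
y = (X[:, 0] - 0.5 * X[:, 2] > 0).astype(int)    # a simple invented rule

# A linear model's reasoning can be read directly off its weights.
linear = LogisticRegression().fit(X, y)
print("weight per feature:", linear.coef_[0])

# A small neural network learns the same task, but its "explanation" is
# just layers of weight matrices that no one can retrace decision by decision.
net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, y)
print("weight matrices:", [w.shape for w in net.coefs_])
```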

AI also suffers from the "GIGO" syndrome--Garbage In, Garbage Out. Russian scientists with nothing better to do conducted an online AI beauty contest in 2016, in which thousands of entrants from around the world submitted selfies to be judged against data sets measuring factors like facial symmetry, with the goal of selecting the most beautiful faces. When the results were announced, forty-three of the forty-four winners had light skin. Why? When the scientists went back over their process, they discovered that the data sets contained only a small proportion of darker-skinned faces. The machine therefore concluded that dark skin was an aberration to be dismissed.
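
A minimal sketch of that failure mode, with entirely synthetic numbers: a scorer built on an imbalanced data set rates anything resembling the underrepresented group as an anomaly.

```python
# Sketch of Garbage In, Garbage Out: a "typicality" scorer built on an
# imbalanced training set penalizes the underrepresented group.
# All numbers below are synthetic and purely illustrative.
import random

random.seed(1)

# One made-up feature on a 0-1 scale, with the training set containing
# 980 examples below 0.4 and only 20 above 0.6.
training = ([random.uniform(0.0, 0.4) for _ in range(980)]
            + [random.uniform(0.6, 1.0) for _ in range(20)])

def typicality(x, data, k=20):
    """How 'typical' x looks: inverse of the mean distance to its k nearest examples."""
    nearest = sorted(abs(x - d) for d in data)[:k]
    return 1.0 / (sum(nearest) / k + 1e-9)

for x in (0.2, 0.8):
    print(f"feature value {x}: typicality {typicality(x, training):.1f}")
# The 0.8 example scores far lower, not because anything is wrong with it,
# but because the training data barely contains anything like it.
```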

In July, IBM proudly announced a new performance review program for its 380,000 employees worldwide that's based on its "Watson" AI product. The company boasts that it won't just measure how well each employee has done in...
