MAKING ARTIFICIAL INTELLIGENCE INTELLIGIBLE: Humans need to know how neural networks make decisions.

By Art Jahnke
SCIENCE & TECHNOLOGY

THE IRONY certainly is not lost on Kate Saenko. Now that humans have programmed computers to learn, they want to know exactly what the computers have learned, and how they make decisions after their learning process is complete. To do that, Saenko, professor of computer science at Boston University, used humans--asking them to look at dozens of pictures depicting steps that the computer may have taken on its road to a decision, and identify its most likely path.

Those experiments worked well. The humans gave Saenko answers that made sense, but there was a problem: they made sense to humans, and humans, Saenko realizes, have biases. In fact, humans do not even understand how they themselves make decisions. How in the world then could they figure out how a neural network, with millions of neurons and billions of connections, makes decisions?

So, Saenko did a second experiment, using computers instead of people to help determine exactly what learning machines learned. "What we learned that's really important is that, despite the extreme complexity of these algorithms, it's possible to peek under the hood and understand their decisionmaking process, and that we can actually ask humans to explain it to us," says Saenko. "So, we think it's possible to teach humans how machines make predictions."
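
The article does not spell out how researchers "peek under the hood," but one common, simple way to probe a trained model is occlusion analysis: hide one region of the input at a time and watch how the prediction changes. The sketch below is purely illustrative and is not the specific method used in Saenko's experiments; the toy model and all names in it are invented for the example.

```python
# Illustrative only: occlusion analysis, a generic way to "peek under the hood"
# of a trained model. Not the specific technique described in the article.
import numpy as np

def occlusion_map(model, image, patch=4):
    """Return a grid showing how much occluding each patch lowers the model's score."""
    baseline = model(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0   # black out one region
            heat[i // patch, j // patch] = baseline - model(occluded)
    return heat  # large values mark regions the decision depended on

# Stand-in "model" (hypothetical): scores an image by the brightness of its upper-left corner.
toy_model = lambda img: float(img[:8, :8].mean())

image = np.random.default_rng(1).random((16, 16))
print(np.round(occlusion_map(toy_model, image), 2))
```

Running the sketch prints a small heat map whose large entries sit in the upper-left corner, the only region this toy model actually uses, which is the kind of evidence a human can inspect and explain.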

Computer scientists know in general terms how neural networks develop. After all, they write the training programs that direct a computer's so-called neurons to connect to other neurons, which actually are mathematical functions. Each neuron parses one piece of information and builds on the information passed along by the neurons before it. Over time, the connections evolve from random to revealing, and the network "learns" to do things like identify enemy stations in satellite images or spot evidence of cancer long before it is visible to a human radiologist. Neural networks identify faces. They drive cars.
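
That description can be made concrete with a small sketch. The code below is illustrative only and has nothing to do with the networks mentioned in the article: it builds a tiny network in plain NumPy in which each "neuron" is just a weighted sum of its inputs passed through a nonlinearity, and training nudges the initially random connections toward ones that solve a toy task (XOR).

```python
# Illustrative only: a tiny neural network in plain NumPy. Each "neuron" is a
# mathematical function -- a weighted sum of its inputs passed through a
# nonlinearity -- and training turns random connection weights into useful ones.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn XOR, which no single neuron can represent on its own.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Random starting connections between layers (the "random" stage).
W1 = rng.normal(size=(2, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for step in range(5000):
    # Forward pass: every neuron builds on the outputs of the previous layer.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: nudge each connection to reduce the prediction error.
    error = output - y
    grad_out = error * output * (1 - output)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= learning_rate * hidden.T @ grad_out
    b2 -= learning_rate * grad_out.sum(axis=0, keepdims=True)
    W1 -= learning_rate * X.T @ grad_hidden
    b1 -= learning_rate * grad_hidden.sum(axis=0, keepdims=True)

# After training, the once-random connections encode the XOR rule.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```

The point of the sketch is the one the article makes: the learned behavior lives entirely in those weight matrices, which is why understanding what a network with millions of neurons has actually learned is so hard.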

That is the good news. The disconcerting news, indicates Saenko, is that, as artificial intelligence (AI) plays an increasingly important role in the lives of humans, its learning processes are becoming more and more obscure. Just when we really need to trust it, it has become inscrutable. That is a problem.

"The more we rely on artificial intelligence systems to make decisions, like autonomously driving cars, filtering newsfeed, or diagnosing disease, the more critical it is that the AI systems can be held accountable," explains Stan Sclaroff, professor of...
