AI: A Tool For Good and Bad.

By Yasmin Tadjdeh
Algorithmic Warfare

With promises of crunching mounds of data into bite-sized nuggets of actionable information, machine learning could be a breakthrough for the intelligence community. However, vulnerabilities within such systems could open them up to cyber attacks.

Jason Matheny, director of the Intelligence Advanced Research Projects Activity, said his organization funds research at over 500 universities, colleges, businesses and labs. A third of his portfolio focuses on machine learning, speech recognition and video analytics.

"For us, machine learning is an approach to dealing with this deluge of data that the intelligence community is confronted with," he said during a panel discussion at a Defense One event focusing on artificial intelligence. Machine learning is a subset of artificial intelligence.

However, despite these promises, the community has become anxious about potential vulnerabilities lurking within the systems, he said.

"Right now, what we receive in the intelligence community is usually too brittle," Matheny said. "It's too insecure for us to deploy."

For example, most image classifiers can be spoofed in less than an hour by a college student, he said.

"A favorite parlor trick now of computer science undergrads is fooling the state-of-the-art image classifier to think that this school bus picture is actually a picture of an ostrich," he said.

Other vulnerabilities include "data poisoning" attacks, in which an adversary mislabels a small slice of the training data so that the resulting classifier learns the wrong lesson. Another is "model inversion," in which an attacker probes a trained classifier to reconstruct the sensitive data it was trained on.
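
A rough illustration of the first attack: the hypothetical Python below flips 10 percent of the labels in a synthetic two-blob training set and measures how far a simple one-nearest-neighbor classifier falls on clean test data. The dataset, model and flip rate are all made up for the sketch:

    import numpy as np

    rng = np.random.default_rng(1)

    def make_blobs(n):
        # Two Gaussian clusters, one per class.
        X = np.vstack([rng.normal(0, 1, (n, 2)), rng.normal(4, 1, (n, 2))])
        y = np.array([0] * n + [1] * n)
        return X, y

    X_train, y_train = make_blobs(200)
    X_test, y_test = make_blobs(200)

    def predict_1nn(X_train, y_train, X):
        # Label each point with the class of its nearest training point.
        d = np.linalg.norm(X[:, None, :] - X_train[None, :, :], axis=2)
        return y_train[np.argmin(d, axis=1)]

    clean_acc = (predict_1nn(X_train, y_train, X_test) == y_test).mean()

    # The "attack": silently flip 10 percent of the training labels.
    y_poisoned = y_train.copy()
    flipped = rng.choice(len(y_train), size=len(y_train) // 10, replace=False)
    y_poisoned[flipped] = 1 - y_poisoned[flipped]

    poisoned_acc = (predict_1nn(X_train, y_poisoned, X_test) == y_test).mean()
    print(f"accuracy, clean labels:    {clean_acc:.2f}")
    print(f"accuracy, poisoned labels: {poisoned_acc:.2f}")

Because a nearest-neighbor model trusts each training point individually, every flipped label poisons the neighborhood around it; on this toy set the accuracy drop roughly tracks the flip rate.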

"Most of the commonly used machine learning systems are vulnerable to these kind of attacks," he said. "We as a community need to... become much more careful in the way we develop machine learning systems that are defensive against various kinds of adversarial attacks."

Red teams--which attempt to find and document holes in information technology systems--are becoming prevalent in the cybersecurity community, Matheny said. That same approach "is now sorely needed in the machine learning community."

IARPA is working on research projects that target the issue. "One is developing classifiers that are robust to various kinds of adversarial inputs," he said. Another is "understanding the different failure modes--the ways in which classifiers can be attacked, creating... ensemble approaches so that you can fool one classifier some of the time, but you can't fool all of the...
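
The ensemble idea can be made concrete: if each member of an ensemble examines the input differently, a perturbation tuned to fool one member tends not to move the majority vote. In the hypothetical sketch below, five nearest-centroid models each see their own slice of the features and the attacker corrupts only the first -- data, models and attack are all synthetic:

    import numpy as np

    rng = np.random.default_rng(2)
    dim, n_models = 20, 5

    # Two Gaussian classes, separated along every feature.
    X = np.vstack([rng.normal(0, 1, (200, dim)), rng.normal(2, 1, (200, dim))])
    y = np.array([0] * 200 + [1] * 200)

    # Partition the features among the ensemble members, then give each
    # member a nearest-centroid classifier over its own slice.
    subsets = np.array_split(rng.permutation(dim), n_models)
    centroids = [
        np.stack([X[y == c][:, s].mean(axis=0) for c in (0, 1)])
        for s in subsets
    ]

    def member_predict(i, x):
        d = np.linalg.norm(x[subsets[i]] - centroids[i], axis=1)
        return int(np.argmin(d))

    x = rng.normal(2, 1, dim)        # a clean class-1 input

    # Attack member 0 only: move its features onto the class-0 centroid.
    x_adv = x.copy()
    x_adv[subsets[0]] = centroids[0][0]

    votes = [member_predict(i, x_adv) for i in range(n_models)]
    print("member 0 alone:", votes[0])                         # fooled -> 0
    print("majority vote: ", int(np.bincount(votes).argmax())) # still 1

Real ensemble defenses diversify architectures and training data rather than just feature slices, but the voting logic is the same: fooling one classifier is not enough to swing the group.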
