It was almost a year ago when Google employees made waves from Silicon Valley to Washington, D.C., by signing a letter objecting to the company's work with the Defense Department's Project Maven.
The effort--to develop AI systems capable of analyzing reams of full-motion video collected by drones and tipping off human analysts when people or events of interest appear--was seen by the employees as putting Google in the business of war. Eventually, the company chose not to pursue another Project Maven contract.
But the brouhaha might have been avoided if the Pentagon and Silicon Valley had known how to communicate with each other better, said Paul Scharre, director of the Center for a New American Security's technology and national security program.
"There's not a lot of crosstalk and crosspollination between these communities--between policymakers and those in the AI community who are concerned," said Scharre, who is also the author of the book Army of None: Autonomous Weapons and the Future of War.
To try to bridge the gap, CNAS is spearheading a new effort--known as the Project on Artificial Intelligence and International Stability--to create more dialogue among policymakers, the developers of AI platforms and national security experts working outside of government.
"You need perspectives from all three to really grapple with... [these issues] effectively," he said.
The project will focus not only on military applications of AI, but also on how other countries are employing the technology, he said.
"Countries around the globe [are] making very clear their intent to harness artificial intelligence to make their countries... stronger, to increase national and economic competitiveness," he said. "We've seen well over a dozen countries now launch some form of national strategy for artificial intelligence."
China and Russia have trumpeted the fact that they intend to make AI a cornerstone of their future scientific pursuits and become world leaders in the technology.
One of the main purposes of the project is to better understand the risks that arise as nations begin to invest in AI technology, along with the security dilemmas the United States should be concerned about. It will bring together three communities--policymakers, members of the AI community and security studies scholars--to better understand the problem, he said.
Scharre, who has researched artificial intelligence for years, said in the course of the work he has done, he...