Current and former Pentagon officials are worried that Google's withdrawal from Project Maven--one of the Defense Department's most high-profile artificial intelligence efforts--could have disastrous effects.
The goal of Maven is to develop AI systems that can analyze reams of full-motion video data collected by drones and tip off human analysts when objects of interest pop up.
Over the past several months, the program--which included partnerships with Silicon Valley companies--has been met with vocal opposition from some in the commercial sector, particularly from Google employees.
In April, more than 3,000 Google workers signed a letter stating that the company "should not be in the business of war" and should sever ties with Project Maven.
In May, Google Cloud CEO Diane Greene announced internally that the company would not seek another contract associated with the effort, news reports said.
That move, however, creates an "enormous moral hazard" for the company, said Robert O. Work, former deputy secretary of defense, during a panel discussion at a recent conference.
"They say, 'Look, this data could potentially down the line at some point cause harm to human life,'" he said. "But it might save 500 Americans, or 500 allies or 500 innocent civilians from being attacked. So I really believe that Google employees are creating a moral hazard for themselves."
When Project Maven was stood up, it was meant to be a pathfinder to demonstrate how the Defense Department could better use artificial intelligence and machine learning, he said.
"We picked what we considered to be the absolute least objectionable thing, and that is using computer vision and teaching AI to look for things on video," he said.
Using a sensor called Gorgon Stare, a drone can fly over a city and collect massive amounts of video, he noted. However, even with three seven-person teams working constantly, the Pentagon was only able to analyze 15 percent of the data. "The other 85 percent of the tape was on the floor," he said.
Artificial intelligence programs, however, could prompt analysts when objects of interest appear, he noted.
Work did not downplay the possibility that it could result in a military strike. "I fully agree that it might end up with us taking a shot," he said. "But it can easily save lives."
Work also noted that Google has an AI center based in China, which he said is cause for alarm.
"In China, they have a concept called military-civil fusion. Anything that's going...