PENTAGON GRAPPLING WITH AI'S ETHICAL CHALLENGES

By Jon Harper

Artificial intelligence is a top modernization priority for the U.S. military, with officials envisioning a wide range of applications, from back-office functions to tactical warfighting scenarios. But the Pentagon faces the daunting challenge of working through a plethora of ethical issues surrounding the technology while staying ahead of advanced adversaries who are pursuing their own capabilities.

Developers are making strides in AI, adding urgency to the department's efforts to craft new policies for the ethical deployment of the technology. In August, an AI agent turned heads when it defeated a seasoned F-16 fighter pilot in a series of simulated combat engagements during the final round of the Defense Advanced Research Projects Agency's AlphaDogfight Trials. The agent, developed by Heron Systems, went undefeated, 5-0, against the airman, whose call sign was "Banger."

"It's a significant moment," said Peter W. Singer, a strategist and senior fellow at the New America think tank, comparing it to chess master Garry

Kasparov losing to IBM's Deep Blue computer at the complex game.

During the simulated dogfight "the AI shifted [its tactics] and it kept grinding away in different ways at him" until it won, noted Singer, co-author of Ghost Fleet and Burn-In, which examine the military and societal implications of autonomy and artificial intelligence.

Although keen to exploit the benefits of emerging AI capabilities, senior defense officials have repeatedly emphasized the need to adhere to laws and values while mitigating risks.

The challenge is "as much about proving the safety of being able to do it as the capability of being able to do it," Assistant Secretary of Defense for Acquisition and Sustainment Kevin Fahey said at the National Defense Industrial Association's Special Operations/Low-Intensity Conflict conference. "We struggled with it policy-wise as much as anything."

While the emergence of new technologies often introduces new legal and ethical questions, analysts say artificial intelligence poses unique challenges.

"This is a technology that is increasingly intelligent, ever-changing and increasingly autonomous, doing more and more on its own," Singer said. "That means that we have two kinds of legal and ethical questions that we've really never wrestled with before. The first is machine permissibility. What is the tool allowed to do on its own? The second is machine accountability. Who takes responsibility... for what the tool does on its own?"

Paul Scharre, director of the Technology and National Security Program at the Center for a New American Security and the author of Army of None: Autonomous Weapons and the Future of War, said the laws of armed conflict have long been baked into how the Pentagon incorporates new technology. But artificial intelligence isn't like standard weapon systems, and it requires more oversight.

"What I think you've seen DoD do, which I think is the right step, is say, AI seems to have something different about it,'" Scharre said. "Because of how it changes the relationship with humans and human responsibility for activity, because of some of the features of the technology today and concerns about ... reliability and robustness, we need to pay more attention to AI than we might normally...
