How AI Could Go Disastrously Wrong

By Yasmin Tadjdeh
Algorithmic Warfare

As researchers and engineers race to develop new artificial intelligence systems for the U.S. military, they must consider how the technology could lead to accidents with catastrophic consequences.

In a startling but fictitious scenario, analysts at the Center for Security and Emerging Technology--which is part of Georgetown University's Walsh School of Foreign Service--lay out a potential doomsday storyline with phantom missile launches.

In the scenario, U.S. Strategic Command relies on a new missile defense system's algorithms to detect attacks from adversaries. The system can quickly and autonomously trigger interceptors to shoot down enemy missiles that might be armed with nuclear warheads.

"One day, unusual atmospheric conditions over the Bering Strait create an unusual glare on the horizon," the report imagined. The missile defense system's "visual processing algorithms interpret the glare as a series of missile launches, and the system fires interceptors in response. As the interceptors reach the stratosphere, China's early-warning radar picks them up. Believing they are under attack, Chinese commanders order a retaliatory strike."

Doomsday examples such as this illustrate the importance of getting artificial intelligence right, according to the report, "AI Accidents: An Emerging Threat--What Could Happen and What to Do."

While AI accidents in sectors outside the Defense Department could certainly be catastrophic--say, with power grids--the military is a particularly high-risk area, said Helen Toner, co-author of the report and CSET's director of strategy.

"The chance of failure is higher and obviously when you have weaponry involved, that's always going to up the stakes," she said during an interview.

AI failures usually fit into three categories, according to the report: robustness, specification and assurance.

Failures of robustness occur when a system receives abnormal or unexpected inputs that cause a malfunction, the report said. Failures of specification happen when a system attempts "to achieve something subtly different from what the designer or operator intended, leading to unexpected behaviors or side effects." Failures of assurance occur when a system cannot be adequately monitored or controlled during operation.
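To make the robustness category concrete, here is a minimal, purely hypothetical sketch in Python, loosely patterned on the report's glare scenario. The function names, threshold value, and sensor readings are all invented for illustration; the report describes no such code, and no real missile defense system works this way. The sketch shows how a single out-of-distribution input can trigger a naive detector, and how requiring corroboration from an independent sensor is one way a designer might blunt that failure mode.

```python
# Hypothetical illustration of a robustness failure: all names, thresholds,
# and values below are invented for this sketch, not drawn from the report.

def naive_detector(brightness: float) -> bool:
    """Flag a launch whenever an optical reading exceeds a fixed threshold.

    This works on the inputs the designers anticipated, but an unusual
    atmospheric glare can produce the same reading as a real launch plume --
    a robustness failure, in the report's terms.
    """
    LAUNCH_THRESHOLD = 0.8
    return brightness > LAUNCH_THRESHOLD


def corroborated_detector(optical: float, radar_track: bool) -> bool:
    """One possible mitigation: demand agreement from an independent sensor,
    so a single anomalous input cannot trigger an irreversible response.
    """
    return naive_detector(optical) and radar_track


# A glare event: a bright optical return with no corresponding radar track.
glare_brightness = 0.93

print(naive_detector(glare_brightness))                # True  -> false alarm
print(corroborated_detector(glare_brightness, False))  # False -> alarm suppressed
```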

While the military could face any of those types of accidents, Toner noted that robustness is a top concern.

"All of them come into play," she said. "Robustness is an...
