Let slip the robots of war: lethal autonomous weapon systems might be more moral than human soldiers.

By Ronald Bailey

Lethal autonomous weapons systems that can select and engage targets do not yet exist, but they are being developed. Are the ethical and legal problems that such "killer robots" pose so fraught that their development must be banned?

Human Rights Watch thinks so. In its 2012 report, Losing Humanity: The Case Against Killer Robots, the activist group demanded that the nations of the world "prohibit the development, production, and use of fully autonomous weapons through an international legally binding instrument." Similarly, the robotics and ethics specialists who founded the International Committee on Robot Arms Control want "a legally binding treaty to prohibit the development, testing, production and use of autonomous weapon systems in all circumstances." Several international organizations have launched a global Campaign to Stop Killer Robots, and a multilateral meeting under the Convention on Certain Conventional Weapons was held in Geneva, Switzerland, last year to debate the technical, ethical, and legal implications of autonomous weapons. "We are concerned," the meeting's organizers say in their Call to Action, "about weapons that operate on their own without human supervision. The campaign seeks to prohibit taking the human 'out-of-the-loop' with respect to targeting and attack decisions on the battlefield." A follow-up meeting is scheduled for April 2015.

At first blush, it might seem only sensible to ban remorseless automated killing machines. Who wants to encounter the Terminator on the battlefield? Proponents of a ban offer four big arguments. The first is that it is morally wrong to delegate life-and-death decisions to machines. The second is that it will simply be impossible to instill fundamental legal and ethical principles into machines in such a way as to comply adequately with the laws of war. The third is that autonomous weapons cannot be held morally accountable for their actions. And the fourth is that, by removing human soldiers from risk and reducing harm to civilians, killer robots lower the threshold for going to war and so make war more likely.

To these objections, law professors Kenneth Anderson of American University and Matthew Waxman of Columbia University respond that an outright ban "trades whatever risks autonomous weapon systems might pose in war for the real, if less visible, risk of failing to develop forms of automation that might make the use of force more precise and less harmful for civilians caught near it."

Choosing whether to kill...
