Robo-morality: can philosophers program ethical codes into robots?

by Joshua Myers

The science fiction canon is filled with stories of robots rising up and destroying their human masters. From its beginnings in Frankenstein, through the stories of Isaac Asimov and Philip K. Dick, to The Terminator, The Matrix, and beyond, popular culture betrays a deep fear of humanity's hubristic creations. The more intelligent they are, the scarier they become.

This is one reason why the Office of Naval Research (ONR) grant of $7.5 million to university researchers at Tufts, Rensselaer Polytechnic Institute, Brown, Yale, and Georgetown to build ethical decision-making into robots is simultaneously comforting and eerie. The goal of this interdisciplinary research--carried out by specialists in Artificial Intelligence (AI), computer science, cognitive science, robotics, and philosophy--is to have a prototype of a moral machine within the next five years.

The questions of whether a machine can have moral agency or exhibit intelligence are interesting, albeit esoteric, topics for philosophers to ruminate over. The aim of this research, however, is not to puzzle over abstract conundrums but to identify the considerations that ordinary humans take into account when making moral decisions, and then to build those considerations into machines. As Steven Omohundro, a leading AI researcher, points out in a May 13 article at the news site Defense One, "with drones, missile defense, autonomous vehicles, etc., the military is rapidly creating systems that will need to make moral decisions." The military currently prohibits fully autonomous armed systems. But military technology is growing ever more sophisticated, and in scenarios where lives are at stake, machines capable of weighing moral factors will be important.

Matthias Scheutz, a researcher at Tufts who will lead the project, gives the example of a robot medic en route to deliver supplies to a hospital. On the way, the robot encounters a wounded soldier who needs immediate assistance. Should the robot abort its mission in order to save the soldier? Modern robots cannot weigh factors like the level of pain the wounded soldier is experiencing, the importance of the current mission, or the moral worth of saving a life; they merely carry out what they were programmed to do. A robot equipped with a moral decision-making system, however, would be able to weigh such factors and reach a moral, rational decision.
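What might "weighing factors" look like in practice? The article does not describe the researchers' actual architecture, but a minimal, purely illustrative sketch in Python, with hypothetical factor names, weights, and a made-up decision threshold, could score the competing considerations and divert only when the weighted case for rescue outweighs the mission:

    # Purely illustrative sketch; factor names, weights, and threshold are
    # hypothetical and not drawn from the researchers' actual system.

    def should_divert(factors, weights, threshold=0.5):
        """Return True if the weighted moral factors favour aborting the mission."""
        score = sum(weights[name] * value for name, value in factors.items())
        return score > threshold

    # Hypothetical scenario: a robot medic encounters a wounded soldier en route.
    factors = {
        "soldier_pain_level": 0.9,      # severity of the soldier's condition (0 to 1)
        "mission_urgency": -0.6,        # importance of the delivery, counted against diverting
        "value_of_saving_a_life": 1.0,  # moral weight assigned to immediate rescue
    }
    weights = {
        "soldier_pain_level": 0.4,
        "mission_urgency": 0.3,
        "value_of_saving_a_life": 0.5,
    }

    print(should_divert(factors, weights))  # True: the weighted score favours stopping to help

Such a toy model of course sidesteps the hard part, which is where the weights come from and whether moral judgment can be reduced to a weighted sum at all.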

The applications of this research are not...
