Roombas in the big house? What to do when robots break the law.

Author: Beato, Greg
Book review: When Robots Kill: Artificial Intelligence Under Criminal Law

In 1979, a robot killed a human for the first time. It happened at a Ford facility in Flat Rock, Michigan, in an elaborate five-level structure called a core stacker where 10 robots continuously stored and retrieved large metal castings. Litton Industries, which built the core stacker and the robots that toiled there, described it as an "unattended system." But according to a 1984 Omni feature about the incident, the machines actually required a great deal of intervention in practice--people had to tweak alignments and pick up dropped objects on a regular basis.

But the robots, which glided along rail-like tracks in near silence, continued operating even when fragile, fleshy human beings were nearby. And one day in 1979, one of those machines, which was equipped with sensors that allowed it to "see" some components of the system but apparently not people, rolled up behind Robert Williams and struck his head, killing him. A jury instructed Litton Industries to pay $10 million in damages to Williams' family. Presumably, the robot got off scot-free.

No account of the incident suggests the robot acted with deliberate malice, or even recklessness, but it set the stage for future dystopias nonetheless. We had begun to create a new category of machines that were capable of killing us--and unlike, say, cars, guns, or roller coasters, these new machines were deliberately imbued with a degree of autonomy that could make their behavior somewhat unpredictable. That autonomy would only increase over time.

Thirty-six years later, the worldwide robot population has exploded, and the bots are increasingly sophisticated. Their designers have gotten more sophisticated too, which helps mitigate some of the machines' potential danger. The Litton Industries robots weighed 2,500 pounds and issued no warning noises when they moved. Today's robots boast sensors that help them avoid collisions with humans; they're often built from lightweight, forgiving materials; and they're frequently designed to be easy to shut off.

But as artificial intelligence (A.I.) systems--including bots that exist as nothing more than lines of code--become increasingly pervasive and autonomous, it's only natural to assume that their potential for unexpected and unwanted behavior is going to increase too. In short, some robots are going to commit crimes.

Take a recent project by a couple of Swiss artists. They created an automated shopping bot, gave it a budget of $100 in...
