Machines Without Principals: Liability Rules and Artificial Intelligence


David C. Vladeck(fn*)

INTRODUCTION

The idea that humans could, at some point, develop machines that actually "think" for themselves and act autonomously has been embedded in our literature and culture since the beginning of civilization.(fn1) But these ideas were generally thought to be religious expressions - what one scholar describes as an effort to forge our own Gods(fn2) - or pure science fiction. One important thread tied together these visions of a special breed of superhuman men/machines: They were invariably stronger, smarter, and analytically sharper; that is, superior to humans in all respects except for those traits involving emotional intelligence and empathy. But science fiction writers were of two minds about the capacity of super-smart machines to make life better for humans.

One vision was uncritically Utopian. Intelligent machines, this account goes, would transform and enlighten society by performing the mundane, mind-numbing work that keeps humans from pursuing higher intellectual, spiritual, and artistic callings.(fn3) This view was captured in the popular animated 1960s television show The Jetsons.(fn4) As its title suggests, the show's vision is decidedly futuristic. The main character, George Jetson, lives with his family in a roomy, bright, and lavishly furnished apartment that seems to float in the sky. George and his family travel in a flying saucer-like car that drives itself and folds into a small briefcase. All of the family's domestic needs are taken care of by Rosie, the robotic family maid and housekeeper, who does the household chores and much of the parenting.(fn5) George does "work." He is employed as a "digital index operator" by Spacely's Space Sprockets, which makes high-tech equipment. George often complains of overwork, even though he appears simply to push buttons on a computer for three hours a day, three days a week.(fn6) In other words, the Jetsons live the American dream of the future.

In tangible ways, this Utopian vision of the partnership between humans and highly intelligent machines is being realized. Today, supercomputers can beat humans at their own games. IBM's "Deep Blue" can beat the pants off chess grandmasters, while its sister supercomputer "Watson" can clobber the reigning Jeopardy champions.(fn7) But intelligent machines are not just for show. Highly sophisticated robots and other intelligent machines perform critical functions that not long ago were thought to be within the exclusive province of humans. They pilot sophisticated aircraft, perform delicate surgery, and study the landscape of Mars; and through smart nanotechnology, microscopic machines may soon deliver targeted medicines to areas within the body that are otherwise unreachable.(fn8) In every one of these examples, machines perform these complex and at times dangerous tasks as well as, if not better than, humans.

But science fiction writers also laid out a darker vision of intelligent machines, fearing that, at some point, autonomously thinking machines would turn on humans. Some of the best science fiction expresses this dystopian view, including Stanley Kubrick's 1968 classic film 2001: A Space Odyssey.(fn9) The film's real star is neither "Dave" (Dr. David Bowman, played by Keir Dullea) nor "Frank" (Dr. Frank Poole, played by Gary Lockwood), the astronauts on a secret and mysterious mission to Jupiter. Instead, the character who rivets our attention is HAL 9000,(fn10) the all-knowing supercomputer who controls most of the ship's operations, but does so under the nominal command of the astronauts. The complexity of the relationship between man and the super-intelligent machine is revealed early in the film. During a pre-mission interview, HAL claims that he is "foolproof and incapable of error,"(fn11) displaying human-like hubris. And when Dave is asked whether HAL has genuine emotions, he replies that HAL appears to, but that the truth is unknown.(fn12)

Once the mission begins, tensions between HAL and the astronauts start to surface. HAL wants the astronauts to tell him the details of the highly secret mission, but Dave and Frank refuse; in fact, they do not know either. Soon thereafter, HAL warns of the impending failure of a critical antenna on the spaceship's exterior. Starting to have doubts about HAL, Dave and Frank lock themselves in an evacuation vehicle to ensure that HAL cannot overhear their conversation; HAL, however, reads their lips through the vehicle's window. Dave and Frank decide to follow HAL's advice and replace the antenna, but they agree that if HAL turns out to be wrong about the antenna's defect, they will shut HAL down. Frank goes on a spacewalk to replace the antenna, and HAL, as he had planned, seizes the moment to kill off the humans. He first severs Frank's oxygen hose and sets him adrift in space. Dave vainly tries to rescue Frank, and as soon as Dave leaves the spacecraft, HAL turns off the life-support system for the three remaining crew members, who are in suspended animation. HAL then refuses to let Dave back onto the spaceship, telling Dave that the plan to deactivate him jeopardizes the mission. Ultimately, Dave makes his way back onto the spaceship and starts shutting HAL down. All the while, as HAL regresses, he pleads with Dave to stop, finally expressing fear of his demise.(fn13)

The question one might ask at this point is what relevance does 2001: A Space Odyssey have to liability rules for autonomous thinking machines? The answer is quite a bit. Today's machines, as path-breaking as they are, all have a common feature that is critical in assessing liability. In each case, the machine functions and makes decisions in ways that can be traced directly back to the design, programming, and knowledge humans embedded in the machine.(fn14) The human hand defines, guides, and ultimately controls the process, either directly or because of the capacity to override the machine and seize control. As sophisticated as these machines are, they are, at most, semi-autonomous. They are tools, albeit remarkably sophisticated tools, used by humans.

Where the hand of human involvement in machine decision-making is so evident, there is no need to reexamine liability rules. Any human (or corporate entity that has the power to do the things humans do: enter into contracts, hire workers, and so forth) that has a role in the development of the machine and helps map out its decision-making is potentially responsible for wrongful acts, negligent or intentional, committed by, or involving, the machine.(fn15) The reason, of course, is that these machines, notwithstanding their sophistication, have no attribute of legal personhood. They are agents or instruments of other entities that have legal capacity as individuals, corporations, or other legal "persons" that may be held accountable under the law for their actions.

But the fully autonomous machines that at some point will be introduced into the marketplace may be quite different, and for that reason, society will need to consider whether existing liability rules will be up to the task of assigning responsibility for any wrongful acts they commit. The first generation of fully autonomous machines - perhaps driverless cars and fully independent drone aircraft - will have the capacity to act completely autonomously. They will not be tools used by humans; they will be machines deployed by humans that will act independently of direct human instruction, based on information the machine itself acquires and analyzes, and will often make highly consequential decisions in circumstances that may not be anticipated by, let alone directly addressed by, the machine's creators.(fn16) Artificial intelligence theorists distill the concept of full autonomy down to the paradigm of machines that "sense-think-act" without human involvement or intervention.(fn17) And Oxford Professor Nick Bostrom, an eminent futurist, goes so far as to suggest that machines "capable of independent initiative and of making their own plans . . . are perhaps more appropriately viewed as persons than machines."(fn18)

Assuming that this description of the capabilities of such machines is accurate, the key conceptual question that autonomous thinking machines will pose is whether it is fair to think of them as agents of some other individual or entity, or whether the legal system will need to decide liability issues on a basis other than agency. To be sure, it is hard to conceptualize a machine as being anything other than an agent of a person, be it a real person or an entity with legal personhood. But there is another argument worth exploring, namely that concepts of agency may be frayed, if not obliterated, by autonomous thinking machines, even those that are not truly "sentient." Let us go back to HAL. At some point before he turns murderous, HAL becomes an "agent" of no one. An agent who decides to go on his own frolic and detour, defying the instructions of his principal, is no longer an agent under any conventional understanding of the law.(fn19) And HAL plainly detoured. HAL was given the ability to think and act independently, so much so that he "decided" to violate the first rule of robotics: that machines must do no harm to humans or to humanity.(fn20) By deciding to harm humans, HAL at least arguably (if not decisively) terminated his status as an agent.

To be...
