Machines Without Principals: Liability Rules and Artificial Intelligence
Publication year: 2021
Citation: Vol. 89 No. 1
INTRODUCTION
The idea that humans could, at some point, develop machines that actually "think" for themselves and act autonomously has been embedded in our literature and culture since the beginning of civilization.(fn1) But these ideas were generally thought to be religious expressions, what one scholar describes as an effort to forge our own Gods,(fn2) or pure science fiction. One important thread tied together these visions of a special breed of superhuman men/machines: they were invariably stronger, smarter, and analytically sharper; that is, superior to humans in all respects, except for those traits involving emotional intelligence and empathy. But science fiction writers were of two minds about the capacity of super-smart machines to make life better for humans.
One vision was uncritically Utopian. Intelligent machines, this account goes, would transform and enlighten society by performing the mundane, mind-numbing work that keeps humans from pursuing higher intellectual, spiritual, and artistic callings.(fn3) This view was captured in the popular animated 1960s television show The Jetsons.
In tangible ways, this Utopian vision of the partnership between humans and highly intelligent machines is being realized. Today, supercomputers can beat humans at their own games. IBM's "Deep Blue" can beat the pants off chess grandmasters, while its sister supercomputer "Watson" can clobber the reigning Jeopardy! champions.
But science fiction writers also laid out a darker vision of intelligent machines, fearing that, at some point, autonomously thinking machines would turn on humans. Some of the best science fiction expresses this dystopian view, including Stanley Kubrick's 1968 classic film 2001: A Space Odyssey.
Once the mission begins, tensions between HAL and the astronauts start to surface. HAL wants the astronauts to tell him the details of the highly secret mission, but Dave and Frank refuse; in fact, they do not know them either. Soon thereafter, HAL warns of the impending failure of a critical antenna on the spaceship's exterior. Beginning to doubt HAL, Dave and Frank lock themselves in an evacuation vehicle to ensure that HAL cannot overhear their conversation; HAL reads their lips through the vehicle's window. Dave and Frank decide to follow HAL's advice and replace the antenna, but they agree that if HAL proves wrong about the antenna's defect, they will shut him down. Frank goes on a spacewalk to replace the antenna, and HAL, as he had planned, seizes the moment to kill off the humans. He first severs Frank's oxygen hose and sets him adrift in space. Dave vainly tries to rescue Frank, and as soon as Dave leaves the spacecraft, HAL turns off the life-support system for the three remaining crew members, who are in suspended animation. HAL then refuses to let Dave back onto the spaceship, telling Dave that the plan to deactivate him jeopardizes the mission. Ultimately, Dave makes his way back aboard and begins shutting HAL down. As HAL regresses, he pleads with Dave to stop, and finally expresses fear of his own demise.(fn13)
The question one might ask at this point is what relevance 2001: A Space Odyssey has to a discussion of liability rules for artificial intelligence.
Where the hand of human involvement in machine decision-making is so evident, there is no need to reexamine liability rules. Any human (or corporate entity with the power to do the things humans do: enter into contracts, hire workers, and so forth) that has a role in the development of the machine and helps map out its decision-making is potentially responsible for wrongful acts, negligent or intentional, committed by, or involving, the machine.(fn15) The reason, of course, is that these machines, notwithstanding their sophistication, have no attribute of legal personhood. They are agents or instruments of other entities that have legal capacity as individuals, corporations, or other legal "persons" that may be held accountable under the law for their actions.
But the fully autonomous machines that will at some point be introduced into the marketplace may be quite different, and for that reason, society will need to consider whether existing liability rules will be up to the task of assigning responsibility for any wrongful acts they commit. The first generation of fully autonomous machines, perhaps driverless cars and fully independent drone aircraft, will have the capacity to act completely autonomously. They will not be tools.
Assuming that this description of the capabilities of such machines is accurate, the key conceptual question that autonomous thinking machines will pose is whether it is fair to think of them as agents of some other individual or entity, or whether the legal system will need to decide liability issues on a basis other than agency. To be sure, it is hard to conceptualize a machine as anything other than an agent of a person, be it a real person or an entity with legal personhood. But there is another argument worth exploring: that concepts of agency may be frayed, if not obliterated, by autonomous thinking machines, even those that are not truly "sentient." Consider HAL again. At some point before he turned murderous, HAL became an "agent" of no one. An agent who decides to go on his own frolic and detour, defying the instructions of his principal, is no longer an agent under any conventional understanding of the law.(fn19) And HAL plainly detoured. HAL was given the ability to think and act independently, so much so that he "decided" to violate the first rule of robotics: machines must do no harm to humans or to humanity.(fn20) By deciding to harm humans, HAL at least arguably (if not decisively) terminated his status as an agent.