Author: Cowger, Alfred R., Jr.
  1. Introduction

    Autonomous vehicles that drive themselves independent of their human occupants are no longer the stuff of science fiction. (2) Semi-autonomous vehicles, i.e., those that use computers and sensors to detect and avoid potential collisions, are already on the road and advertised by major car makers. As of 2013, various programs and tests related to autonomous vehicles were being run at 85 Asian locations, 159 European locations, 7 Oceania locations, and 149 locations in North America, including in 22 different states. (3) Some experts believe that fully automated vehicles will be common on the roadways shortly after 2020, with 30% of all automobiles in operation being fully automated by 2025, and up to 70% of all vehicles being fully automated by 2035. (4)

    To be fully automated, a vehicle will have a set of sensors that will detect and monitor other vehicles, and receive input about traffic flow and volume, road conditions, and even weather conditions. (5) Autonomous vehicles will also share data with each other, "learning" from each other about factors that might affect a given vehicle's trip. (6) An autonomous vehicle will also have an on-board computer that, through algorithms, will manage all aspects of the vehicle's operation, such as navigating the best route to the intended destination or directing the vehicle's responses to hazards encountered during the trip. (7)

    However, the software developed by manufacturers of these vehicles will not pre-define a vehicle's response to those hazards. (8) Rather, the software for autonomous vehicles will be given an ultimate goal by the manufacturer, such as (to use an oversimplified example) "determine the best response to an impending collision," and the vehicle will decide via algorithms what the best response to a given situation will be. (9) Moreover, the computer will start learning from its environment the moment the vehicle leaves the sales lot, constantly running scenarios or experiments to determine possible outcomes based on the factors to which the vehicle is exposed. (10) Based on its own analysis of data and outcomes, the computer's algorithm, and thus the vehicle's response, will continually change, producing a unique response to any situation the vehicle might encounter. (11) The vehicle will also have the capacity to "learn" from other vehicles, and its algorithm will change to incorporate their responses to road situations. (12) As a result, the original manufacturer of an autonomous vehicle will not be able to predict at the time of manufacture the means by which an algorithm will make a decision. (13)

    The decisions made by an autonomous vehicle in the face of an unavoidable collision will raise questions of liability that courts and legislatures have not heretofore faced. (14) In a hypothetical scenario, a vehicle rounding the sharp curve of a busy four-lane thoroughfare confronts an elderly lady jaywalking in the vehicle's lane. (15) The vehicle senses that braking is not an option because the vehicle is going too fast. (16) The vehicle would have to choose among four possible outcomes: 1) hit the elderly lady; 2) cross over to the next lane, thereby crashing into and striking a van of young Muslim doctors; 3) cross into oncoming traffic, thereby colliding with a school bus full of elementary children; or 4) run off the road and over a cliff, thereby avoiding the pedestrian and all traffic, but certainly resulting in the serious injury or death of the driver. (17) Any decision will result in grievous harm to someone. (18) The question for lawyers and courts will be who should be liable for the decision process that resulted in that harm. (19)

    If the hypothetical collision were the result of an outright defect in the vehicle or the software running the car, contemporary product liability law and legislation like the Uniform Commercial Code already provide the framework to analyze liability and damage issues. (20) Likewise, if the vehicle or software could have been designed to avoid the grievous harm, or fell below industry standards that would, if met, have avoided the harm, legal standards already exist for determining the duty of care expected of the vehicle manufacturer to design a better vehicle. (21)

    However, what should be the legal response if the algorithm is working exactly as intended, i.e., the vehicle had to be operated such that harm to someone was inevitable? (22) On what basis of liability should a manufacturer be liable because someone was intentionally, albeit correctly, harmed by that manufacturer's product? (23) What if others, whether the vehicle owner, a jury, or society in general, argue that a different victim should have been chosen, based on economic factors, moral judgments, or even bigoted precepts? (24) Does the chosen victim have any grounds for recovery of damages caused by the intentional decision of the vehicle? (25) As will be discussed herein, the current framework of tort law simply cannot address these questions, because that framework does not work for products that are meant to change once they leave the manufacturer's control and are not operated by humans, thus precluding any analysis based on a human-based duty of care. (26)

    The most obvious answer is that algorithms will have to be designed to make decisions that are arguably always correct. (27) The clearest example might be vehicles that are programmed to always save the life of the vehicle's occupant, even if that means crashing into and possibly killing others. (28) Going a step further, what if the algorithm is programmed to make a decision based purely on the subjective desires of the vehicle's owner? (29) With sufficient sensors and computing capacity, a vehicle could determine at least the owners of the other vehicles in the hypothetical accident. (30) For example, a sensor might read the license plates of the van of Muslim doctors and, based on the name of the registered owner, conclude that one of the other vehicles' owners was from an ethnic group the driver hated, and thus that vehicle should face the most harmful outcome. (31) In either example, the algorithm will be working perfectly when it chooses the victim. (32) But is introducing any bias into a decision otherwise based on an objective algorithm a wise choice, and if so, who should decide what is a "good" bias? (33)

    Finally, it should be noted that this article focuses on vehicles driven on roadways. (34) However, other types of vehicles will be autonomous, ranging from heavy equipment at construction sites, to equipment used in mining and drilling operations, to ocean-going vehicles far from land. (35) Thus, these issues will be wide-ranging, reaching far beyond the U.S. highway system. (36) That, in turn, means that the question of liability for the harm to the chosen victims of autonomous vehicles will become a frequent question in the near future across many societies, each with its own value system, further complicating the answer to the question of liability for the decisions of autonomous vehicles. (37)

  2. Liability for Collisions Resulting from Objective Algorithms: Why Traditional Legal Theories Do Not Work

    Any analysis of liability for autonomous vehicle decisions should start with the current legal theories that would apply to any tort resulting in personal injury. (38) First, the lawyer could assert that the operation of an autonomous vehicle is an ultrahazardous activity. (39) The lawyer would most certainly move beyond this almost archaic legal theory, however, and assert liability on the basis of product liability law. (40) The lawyer could argue that the algorithm's decision would be a breach of an express or implied warranty under the Uniform Commercial Code. (41) Finally, the lawyer might claim that an autonomous vehicle's manufacturer must be liable based on either the traditional law of negligence or the more modern law of strict liability for the harm caused by the algorithm's conclusions. (42) Because none of these theories is suited to a product that is meant to change after it is manufactured and is operated without human involvement, none of them can actually provide a basis for recovery by the accident victim. (43)

    A. The Operation of an Autonomous Vehicle as an Ultrahazardous Activity?

    The first ground, ultrahazardous activity, would create an illogical paradox if applied to the algorithm. (44) An ultrahazardous activity is one that poses a high risk of harm despite a party's reasonable efforts to reduce that risk, and whose harm, when it occurs, is likely to be severe. (45) Those undertaking ultrahazardous activities become strictly liable for all damages arising from the activity, without any need to determine duty or fault. (46) The ultrahazardous activity doctrine might seem appropriate for autonomous vehicles because the risk is inevitable even for the best-engineered autonomous vehicle, and that risk could often be fatal. (47)

    However, the ultrahazardous activity doctrine will not apply to autonomous vehicles for two reasons. (48) First, an ultrahazardous activity is one that is uncommon; in fact, the moniker has more recently been changed to "abnormally hazardous" to reflect this element of uncommonness. (49) With autonomous vehicles eventually becoming the primary vehicles on the roadway, decisions made by algorithms in the event of collisions will be daily occurrences, and so in no way "abnormal." (50) Second, and more importantly, an autonomous vehicle's design makes it safer than one driven by a human, since the algorithm controlling the car can act more quickly and more correctly than any human. (51) It is estimated that accident rates could plummet ninety percent as autonomous vehicles become prevalent. (52) It would be illogical to hold that a product that results in such a dramatic decrease in accidents could be deemed "hazardous," let alone abnormally hazardous. (53)
