May It Please the Algorithm[1]: The Future of A.I. in the Practice of Law
89 J. Kan. Bar Assn 1, 36 (2020)
Kansas Bar Journal
January, 2020

By Bob Lambrechts

Technology is rapidly advancing us to a crucial juncture in humanity’s relationship with the law. In future disputes, machines may make life-and-death decisions all on their own. In 1970, the film “Colossus: The Forbin Project” brought to theaters Dr. Charles A. Forbin’s creation, a supercomputer designed to oversee and control America’s huge military defense system. Not only does it control the nation’s nuclear-tipped missiles, but it also has limitless potential due to the sentience and artificial intelligence that Forbin embedded within the system. After launching a nuclear-tipped missile at a Soviet oil field to convey a lethal lesson, Colossus tells Forbin that the world, under its absolute control, is now freed from war.

Litigation, a form of Armageddon only marginally less destructive, has rampant inefficiencies that make it nearly impossible to obtain an expeditious, on-the-merits resolution of even the most straightforward lawsuits. One commentator has noted:

The recourse to legal actors and proceedings is costly, emotionally debilitating, and potentially counterproductive. The adversary system can be a hugely inefficient means of uncovering facts; its relentless formalities and ceaseless opportunities for splitting hairs are time consuming and expensive.[2]

Litigation is inefficient because it is ponderous and labored, which means that a client’s financial and personnel resources can be diverted for extended periods of time. A large pending case can require a client to set aside substantial financial resources to address the litigation, to preserve numerous documents, and to utilize electronic search tools to scan a staggering number of company documents. Moreover, the client will have to assign personnel to address the demands of the litigation instead of having them focus on their revenue-generating roles within the company.

Whatever your attitude toward artificial intelligence, lawyers should count on technology continuing to change our profession, and particularly how we practice litigation. It is difficult to think of a single area of modern technology that has not penetrated the law firm. Even if some lawyers fought against facsimile machines and computers at first, technology has made society more efficient and, ironically, may have increased the volume of legal services as lawyers began drafting their own documents and sending their own correspondence by electronic mail. The idea that a solo practitioner could function without an accountant and a legal assistant would have been unthinkable fifty years ago, yet it is standard procedure today.

This article will discuss the expanding role of artificial intelligence in the legal profession and the current and future roles of artificial intelligence in our legal system. I draw comparisons with the U.S. military’s analysis of the role of artificial intelligence in war fighting. The basis for that comparison is that the Department of Defense is at the cutting edge of the development of this technology, and, just as in the legal profession, highly ethical human judgment is critical to a morally acceptable outcome. In addition, this article will discuss the prospect of using artificial intelligence in what might be considered the most hallowed roles within our legal system: those of the judge and the jury, the arbiters of justice.

Artificial Intelligence – Background and Projections

John McCarthy, a professor of computer science at Stanford, first conceived the term “artificial intelligence” in 1955.[3] In 1956, McCarthy invited a group of researchers from a multitude of disciplines, including language simulation, neural networks and complexity theory, to a summer workshop, the Dartmouth Summer Research Project on Artificial Intelligence, to discuss what would ultimately become the field of artificial intelligence.[4] It was evident many decades ago that electronic capacity and functionality were doubling approximately every eighteen months,[5] and the rate of improvement showed no signs of slowing down. In fact, experts predict that spending on artificial intelligence by companies will grow from $37.5 billion in 2019 to nearly $98 billion in 2023, a compound annual growth rate of 28.4 percent during the period between 2018 and 2023.[6]

The Dartmouth conference was one of the first serious attempts to consider the consequences of this exponential curve. Many attendees came away from the conference convinced that continued advancements in electronic speed, capacity, and software programming would lead to the point where computers would someday have the resources to be as intelligent as human beings.[7]

Artificial intelligence is the science and engineering of making intelligent machines. It is not a single technology but is composed of related and often-connected technologies that work together to supply “human-like” responses and reasoning. Also referred to as “cognitive technologies,” artificial intelligence comprises, among other things, the technologies of deep learning, natural language processing, machine vision, speech recognition and expert systems.[8] Among these, deep learning is the most transformative and is the core of what is considered modern artificial intelligence. Deep learning utilizes neural networks, computer systems modeled after the human brain and nervous system that learn from large amounts of data.[9] This is akin to how we learn from experience. A deep learning algorithm performs a task repeatedly, each time tweaking it a little to improve the outcome.[10] We refer to “deep” learning because the neural networks have many (deep) layers that enable learning. “Just about any problem that requires ‘thought’ to figure out is a problem deep learning can learn to solve.”[11]
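
For readers who want to see what this looks like in practice, the short Python sketch below builds a toy neural network with two layers of weights and repeatedly adjusts them to improve its answers on a simple problem. It is purely illustrative and written for this article; it is not drawn from any of the systems cited above, and real deep learning systems involve far larger networks, far more data, and specialized software.

```python
# A minimal sketch of the idea described above: a small neural network with
# "layers" that repeatedly adjusts its internal weights to improve its
# answers on a toy problem (the XOR function). Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: two inputs and the desired output (XOR of the inputs).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights: input -> hidden layer, hidden layer -> output.
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.5
for step in range(10_000):
    # Forward pass: each layer processes the data and passes it to the next.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # How wrong are the current answers?
    error = output - y

    # Backward pass: nudge each layer's weights to reduce the error a little.
    grad_output = error * output * (1 - output)
    grad_hidden = (grad_output @ W2.T) * hidden * (1 - hidden)
    W2 -= learning_rate * hidden.T @ grad_output
    W1 -= learning_rate * X.T @ grad_hidden

# After many small adjustments the outputs typically approach [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))
```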

It is estimated that every day we generate a mind-boggling 2.5 quintillion bytes of data, with 90 percent of all data today having been created in the last two years.[12] Since deep-learning algorithms require enormous amounts of data to learn from, this increase in data creation is one reason that deep learning capabilities have grown in recent years. In addition to more data creation, deep learning algorithms benefit from the more robust computing power that is available today. It is computing capacity that makes deep learning possible. The typical human brain is made up of an estimated 86 billion interconnected brain cells, or neurons.[13] To provide a sense of the advances being made in this area, in the summer of 2019 Intel made considerable progress toward a digital equivalent of the human brain by building a computer system with 8 million digital neurons, with the goal of reaching 100 million by late 2019.[14]

Artificial neural networks seek to simulate these biological networks and get computers to act like interconnected brain cells, so that they can learn and make decisions in a more humanlike manner.[15] Discrete areas of the human brain process information differently, and these parts of the brain are arranged in a hierarchical fashion.[16] As information enters the brain, “each level of neurons processes the information, provides insight and passes the information to the next, more senior layer.”[17]

With deep learning, the computer trains itself to process and learn from data. According to Ray Kurzweil, an American inventor, futurist and director of engineering at Google, by 2045 computers utilizing artificial intelligence will surpass human intelligence.[18] He describes uploading as a process of “scanning all of the salient details (of a human brain) and then reinstantiating those details into a suitably powerful computational substrate.”[19] “This process would capture a person’s entire personality, memory, skills and history.”[20]

Deep learning is a method for software to learn by trial and error at a pace limited only by computer processing power and cloud storage.[21] Using unstructured data[22] (80 percent of all the data that exists is unstructured)[23] and operating without the need for explicit, step-by-step instructions, deep learning systems iteratively generate solutions.[24] The outcome of many deep learning iterations is a digital neural network, considered comparable to how humans think, that establishes patterns, relationships and connections within otherwise unstructured data.
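
As a concrete, if modest, illustration of finding patterns in unstructured text without explicit instructions, the sketch below uses the open-source scikit-learn library to group a handful of invented example sentences by topic. It relies on classical machine learning (word-importance scores and k-means clustering) rather than a deep neural network, but the point is the same: the program is never told what the documents are about, yet it sorts them on its own.

```python
# A minimal sketch of finding structure in unstructured text without
# hand-written rules. The sample documents are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

documents = [
    "The tenant shall pay rent under the lease on the first day of each month.",
    "The landlord may terminate the lease if the tenant fails to pay rent.",
    "The neural network model was trained on a large dataset of example images.",
    "Training a deep neural network model requires a large dataset and computing power.",
]

# Turn free-form text into numeric features (word-importance scores).
vectorizer = TfidfVectorizer(stop_words="english")
features = vectorizer.fit_transform(documents)

# Group the documents into two clusters with no labels or instructions
# about their subject matter.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

for doc, label in zip(documents, labels):
    print(label, doc[:60])
# The lease-related sentences and the machine-learning sentences typically
# end up in separate clusters.
```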

Now, “new machine learning approaches literally have the machines learn on their own things that we don’t know how to explain.”[25] The machines learn patterns, correlations and rules, sometimes the ones that humans use to accomplish the task but other times ones that humans cannot discern.[26] Indeed, many times the programmer cannot account for how the machine came to a particular result, even if the result is correct.[27] Tasks that were once impossible to automate are now performed on par with human experts, including not only facial recognition,[28] but also skin cancer detection[29] and some types of language translation.[30] IBM’s Watson, for example, analyzed questions and content comprehensively and quickly and eventually won “Jeopardy!” against former champions.[31] Reinforcement learning, a category of machine learning, entails experimentation.[32] Reinforcement learning is already prevalent in some forms of artificial intelligence. A computer developed by a subsidiary of Alphabet learned and mastered Go, a notoriously complicated board game, and eventually beat one of the world’s best human players.[33] With reinforcement learning, “the neural network is reinforced for positive results, and punished for a negative result, forcing the neural network to learn over time.”[34]
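
The short sketch below illustrates the reinforce-and-punish cycle described in the quotation above, using a simple tabular form of reinforcement learning (Q-learning) on an invented toy environment. It is a teaching example only; systems such as the Go-playing program are vastly more sophisticated.

```python
# A minimal sketch of reinforcement learning: the agent wanders a tiny
# one-dimensional "corridor," is rewarded for reaching the right end and
# penalized for stepping off the left end, and gradually learns which way
# to move. Environment and parameters are invented for illustration.
import random

N_STATES = 5            # positions 0..4; position 4 is the goal
ACTIONS = (-1, +1)      # step left or step right
q_table = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration

random.seed(0)
for episode in range(500):
    state = 2                            # start in the middle
    while 0 <= state < N_STATES - 1:
        # Explore occasionally; otherwise pick the action that looks best.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q_table[(state, a)])

        next_state = state + action
        if next_state >= N_STATES - 1:
            reward = 1.0                 # positive result: reached the goal
        elif next_state < 0:
            reward = -1.0                # negative result: fell off the edge
        else:
            reward = 0.0

        # "Reinforce" the estimate for this state and action toward the
        # reward plus the best value achievable from the next state.
        best_next = 0.0
        if 0 <= next_state < N_STATES - 1:
            best_next = max(q_table[(next_state, a)] for a in ACTIONS)
        q_table[(state, action)] += alpha * (
            reward + gamma * best_next - q_table[(state, action)]
        )
        state = next_state

# After training, the learned values favor moving right from every position.
print({s: max(ACTIONS, key=lambda a: q_table[(s, a)]) for s in range(N_STATES - 1)})
```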

Autonomous Weapon Systems and the U.S. Military

While to some it may appear to be a non-analogous leap to commingle the discussion of artificial...
