The Legal Ethics of Generative AI—Part 3: A Robot May Not Injure a Lawyer or, Through Inaction, Allow a Lawyer to Come to Harm

Vol. 52, No. 8 [Page 30]
Colorado Lawyer
October, 2023

INTELLECTUAL PROPERTY LAW

BY COLIN E. MORIARTY

This is the third and final article in a series discussing the legal implications of generative AI. This installment examines ethical considerations for attorneys using generative AI.

The practice of law has marched in step with improvements in technology. The days of searching through a stack of Pacific Reporters in a library have been replaced with inputting queries into online databases. Instead of combing through paper or microfiche catalogs, lawyers can now rifle through recorded documents online. Undoubtedly, this has made practitioners more efficient. But it also creates a danger of losing track of the analog reality that still dictates how the law is published and argued. The organization behind legislation and opinions was developed on paper, and any lawyer who does not understand that system will miss opportunities or expose themselves to embarrassing and costly mistakes. Unlike fictional artificial intelligence, real-life generative AI is not necessarily focused on helping the lawyers who use it avoid harm.[1]

The rise of generative AI is the next major technological milestone in the practice of law, promising great advances in efficiency and training. Large language models (LLMs), a form of generative AI with a peculiarly humanlike capacity to interpret and produce human language, appear poised to have the most transformative impact on the practice of law.[2] Engineers have been trying to create so-called "legal expert systems" to automate the practice of law since the early computer era and have fallen short largely on the difficult problem of language comprehension. With LLMs, this problem may be solvable for the first time. Some lawyers are already using generative AI tools today to help them summarize or understand large documents or sets of documents (such as discovery), conduct legal research, brainstorm ideas, and assist with any number of other tasks.

Generative AI technology cannot be ignored. A lawyer has an ethical duty to understand and stay abreast of new technology relevant to the practice of law.[3] That means understanding not only how the technology can be used but also its risks. Generative AI presents grave dangers for the uninformed, hasty, or lazy.[4] This concluding piece of a three-part series explores how AI is being incorporated into legal practice and discusses the range of potential risks attorneys face when determining whether and how to use this rapidly evolving technology. Some of those dangers implicate a lawyer's ethical duties, such as the duties of competence, candor, supervision of nonlawyer assistants, confidentiality, and avoiding discrimination. Other risks are more subtle, such as the potentially corrosive effect on the development of legal reasoning skills and the training and professional development of new lawyers.

Generative AI Is Being Integrated Into the Legal Profession

Engineers have been trying to make robot lawyers for a long time. In the early computer era, some programmers tried to formalize the rules of law into logical statements and then added interpretative software that would allow a nonexpert to plug in the facts and receive the correct legal opinion. These so-called "legal expert systems" were envisioned as a way to replace lawyers with software.[5] The idea that the law could be reduced to logical statements has long had its critics.[6] Witnesses and evidence can be misleading or untruthful, relationships and reputation can make a difference in a lawyer's success, and judges are human beings—not algorithms or equations to be solved.[7] Even black letter law requires interpretation.[8] Coming up with formal rules to systematically encode the ambiguous, sometimes messy natural language of legislation into a formal logical system turned out to be extremely difficult.[9]

These legal expert systems largely did not deliver on their promise. A fundamental reason for their failure has been attributed to the prior software's inability to perform the "very mentally demanding task . . . which allows the lawyer to interpret legislation."[10] Even the most commonsense legal reasoning proved intractable.[11]

Generative AI may bridge this gap. LLMs have had shocking success in mimicking human understanding and production of language. They have accomplished this not by being taught how to encode language directly, but by being fed enormous amounts of written language and being asked to synthesize a map or algorithm that successfully produces language matching what already existed.[12] Through machine learning, the model eventually developed internal logic, not yet fully deciphered by programmers, that was effective at this task. The law is largely dominated by the written word as recorded in statutes, motions, briefs, orders, articles, and other sources. This vast corpus of written words is exactly the kind of thing needed to fine-tune an LLM, and the production of written words is exactly what an LLM does.[13]

As with its use in other professions, generative AI could automate certain intelligent tasks that previously could only be done by a human being. In law, for example, these tasks might include proofreading, searching for applicable authority, drafting a memo summarizing the law or facts, or producing timelines or tables of contents for large documents or groups of documents. Rather than requiring a user to consider precise search terms likely to occur in the material being sought, it could allow contextual searches of authority or documents for a particular subject matter or issue. The software is also very good at helping brainstorm ideas to spur the human creative process, such as generating ideas for voir dire, presenting possible questions for depositions, or suggesting counterarguments to a draft brief.

Several LLMs are available to lawyers today, with more on the horizon. Lawyers can access ChatGPT, OpenAI's LLM that kickstarted the current AI boom.[14] It can be used either directly through ChatGPT's website or app, or through other software that queries ChatGPT via API calls.[15] Some practitioners are already using ChatGPT in their practice and are happy to extol its benefits and offer tips and techniques for its use.[16] Vendors likewise use the ChatGPT API to power their own applications marketed to lawyers. For example, Casetext, a legal research service, released an AI legal assistant named CoCounsel in March 2023.[17] CoCounsel allows users to describe a fact pattern and receive the applicable law along with an explanation, summarize large groups of documents, organize questions for depositions, and complete other tasks. Lawgeex sells access to an LLM that promises to help with contract review by analyzing legal language "the same way a human lawyer would."[18] And DISCO offers a chatbot named Cecilia that promises to provide "evidence-based answer[s] with citations to documents" in an eDiscovery database.[19]
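For readers curious what "querying ChatGPT using API calls" looks like in practice, the Python sketch below shows the general shape of such a request. It is a minimal illustration only: the prompt, model name, endpoint, and helper function are the author's assumptions for demonstration, not details drawn from Casetext, Lawgeex, DISCO, or any other vendor discussed in this article, and a real integration would also handle authentication, errors, and confidentiality safeguards.

```python
import json

# Endpoint for OpenAI's chat completions API (illustrative; a vendor
# product would typically call this behind the scenes).
API_ENDPOINT = "https://api.openai.com/v1/chat/completions"


def build_summarize_request(document_text: str, model: str = "gpt-4o-mini") -> dict:
    """Assemble the JSON payload asking an LLM to summarize a legal document.

    The system prompt and parameters here are hypothetical examples.
    """
    return {
        "model": model,
        "messages": [
            {
                "role": "system",
                "content": "You are a paralegal assistant. Summarize the document.",
            },
            {"role": "user", "content": document_text},
        ],
        # Lower temperature makes the output more predictable, which is
        # generally desirable for legal summarization tasks.
        "temperature": 0.2,
    }


# Build (but do not send) a request for a hypothetical document.
payload = build_summarize_request("Lease agreement between Landlord A and Tenant B ...")
print(json.dumps(payload, indent=2))
```

Sending this payload to the endpoint with a valid API key would return the model's summary; the point here is simply that "using the API" means a program assembling and transmitting structured requests like this one on the lawyer's behalf.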

More applications are on the way. Logikcull, an eDiscovery vendor, is preparing to release a generative AI product that will integrate ChatGPT into its systems.[20] Logikcull promises that its software will be able to perform context-based searches, such as "find any potential violations" of a statute or "find where Jane Smith's statements show her public statements were false"—a useful enhancement of the current process of brainstorming keywords for a text-based search.[21] LexisNexis is also working on its own version of an AI legal assistant,[22] which may prove very useful given the company's extensive library of statutes, rules, cases, briefs, orders, and secondary sources.[23] In interviews with the author, LexisNexis predicted that its AI tools would be available on its web-based research service around September 2023. In addition, LexisNexis is working with Microsoft to make its fine-tuned legal models available through other software, such as Microsoft's Copilot AI. If all goes well, lawyers will, for example, be able to interact with LexisNexis' model from directly within a Word document.[24]

Lawyers are already using generative AI in their practices in some capacity and will continue to do so. According to a recent Thomson Reuters Institute survey of 440 lawyers, 82% of lawyers surveyed believe that generative AI will be applied to legal work, though only 51% think it should.[25] Another survey from LexisNexis found that half of lawyers surveyed have already used AI in their practice or plan to do so.[26]

It may be natural to expect lawyers to adopt generative AI products. After all, in addition to the legal research tools and online court and public records systems mentioned above, lawyers already routinely use cloud-based file storage and case management systems, communicate by email, coordinate on electronic calendars, and use Google or other search engines to seek data online. In fact, most lawyers are likely already using AI, whether they know it or not, in the form of algorithms fueling their legal research systems.[27] In between typing "dog /s bite /p ("warning" or "sign" or "trespass" or "notice")" and the display of results of that inquiry by the legal research service of choice, there is a great deal of calculation going on to decide which cases to present and in what order.[28] The black box of generative AI may not be much different from the black box of search algorithms when it comes to the practice of law.

Candor and Supervision

In learning how to better mimic the slippery and ambiguous nature of language, and perhaps because of doing so, LLMs are more unpredictable than the other tools a lawyer may use. The largest risks to attorneys using generative AI may be overestimating the...
