Colorado Lawyer
Vol. 52, No. 7 [Page 30]
September, 2023

FEATURE | INTELLECTUAL PROPERTY LAW

The Legality of Generative AI—Part 2

I'm sorry, User. I'm afraid I can't do that.

BY COLIN E. MORIARTY

This is the second in a series of articles discussing the legal implications of generative AI. This installment discusses potential risks to end users of AI in the commerce context.

Just as pocket calculators lighten the mental weight associated with arithmetic, generative AI promises to do the same for writing, art, and other creative endeavors. Rather than spending decades developing an artistic style, individuals can spend an afternoon describing to a computer program the kind of art they want to see and then select the best results from among the hundreds of output images. Rather than wrestle with essay structure and topic sentences, a writer can generate a grammatically correct first draft in a few minutes of prompting. In addition to producing output for the unskilled, this capability may also speed up the output of skilled professionals. But this potential increased efficiency is not without consequences. This article addresses some of the apparent risks to the generative AI end user, with a focus on use in commerce.[1]

An Efficient but Unreliable Business Partner

Some studies suggest that the use of generative AI has already increased worker productivity by 14%.[2] Combined with the ability to work remotely, this productivity boost has led some workers to report that they can hold several jobs simultaneously.[3] The technology may be new, but businesses are already adopting generative AI, particularly large language models (LLMs), into their workflows.[4]

This potential utility comes with costs, of course.[5] Some of these costs are to society as a whole. How does society change when individuals need not invest the personal growth required to compose, write, or draw at a certain baseline level of competence?[6] Generative AI may harm those whose jobs are changed, replaced, or devalued, as is often the case with automation.[7] Other risks are specific to individual users who may wish to integrate generative AI to generate content, interact with third parties, or make decisions.

Of course, it is possible to intentionally use generative AI to advance bad goals. Cybercriminals use generative AI to create viruses and malware.[8] Scammers use AI to learn and replicate family members' voices to place phone calls asking for money.[9] Counterfeiters use AI to learn and copy the literary or artistic style of another.[10] Political actors use it to create fake videos for campaign purposes.[11] Generative AI is used to create fake data to stymie researchers.[12] These are serious problems keeping businesses and law firms on their guard. But while generative AI may make it easier to commit, or harder to detect, fraud, these kinds of intentional bad acts are not alien to the law.

Software that can misbehave unbidden is more novel. The first article in this series explained why, at a very fundamental level, it is not possible to perfectly predict the behavior of generative AI.[13] To recap briefly, generative AI models are not directly programmed with a series of instructions by human beings. Instead, the programmer provides a training framework and a large set of training data to a model, and then repeatedly tests how well the model performs compared to its training data. The programmer then adjusts the model to perform slightly better the next time until the model ends up generating a good internal map between prompts and the desired output. No one knows exactly what kind of internal algorithms the model ended up using. Moreover, the training process is not identical to the real world, and behavior that may have worked well in training may produce erroneous results in practice. Finally, though the software often appears omniscient when it comes to information in its training set, it is a far cry from omnipotent and will often produce vague and nonspecific output unless carefully prompted or managed to do otherwise.[14]
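The training process recapped above can be illustrated with a toy sketch. The following Python example is purely illustrative and is not any vendor's actual code: a simple linear model stands in for a real neural network, and gradient descent stands in for the full training framework. The point it demonstrates is the one made above: the programmer supplies only the framework and the training data, repeatedly tests the model against that data, and nudges its internal parameters to perform slightly better each time. The final parameter values are discovered by the loop, not written by a human.

```python
# Toy training loop: fit y = w*x + b to example data by trial and adjustment.
# (Illustrative only; real generative AI models have billions of parameters.)

training_data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # (input, desired output)

w, b = 0.0, 0.0        # internal parameters, initially arbitrary
learning_rate = 0.05

for step in range(2000):
    for x, target in training_data:
        prediction = w * x + b          # test the model against training data
        error = prediction - target
        w -= learning_rate * error * x  # adjust to perform slightly better
        b -= learning_rate * error      # next time through the loop

# The loop converges toward w near 2 and b near 1 -- a mapping that
# reproduces the training data but that no human wrote directly.
```

As the article notes, the hazard is in the last step: the learned mapping fits the training data, but nothing guarantees it behaves sensibly on inputs that differ from what it was trained on.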

In its current state, therefore, the software behaves like an untrained, entry-level intern with full access to the Internet and great technical writing skills but without experience, context, or particular loyalty to your company. If a business would not entrust a task to such an individual, it probably should not entrust the task to AI.[15]

User Concerns Regarding Intellectual Property

A natural role for generative AI in a business context is to generate content.[16] Automating the human creative process brings with it several new risks, however. Without more involvement from a human creator, the work product of AI is probably not protected by either copyright or patent law, leaving a business unable to protect what it created. And, as a golem without loyalty or context, AI could create legal problems if it infringes on prior works without the user's intention or knowledge.

Difficulty in Protecting Intellectual Property in Works Generated by AI

While users may be prone to anthropomorphize, the law is not. Works wholly created by generative AI are likely not protected by intellectual property law because no human created them. Sometimes, non-human entities do have rights under the law. Corporations, governments, boats, and others can own property and exercise rights.[17] Colorado law, for example, expressly conveys rights on corporations,[18] and many state statutes expressly include entities in the definition of "person."[19] Where non-humans have rights, however, it is typically the result of an express exception to the normal assumption that laws apply only to natural persons. Courts have sometimes expanded the rights afforded to entities,[20] but only on the premise that the entity serves as a vehicle for human constitutional rights.[21]

The judicial presumption appears to be that laws are intended to apply to human beings except as otherwise stated. So, for example, "the world's cetaceans" (whales, porpoises, dolphins, etc.) do not have standing to bring claims under the Endangered Species Act or similar laws because, though Congress could have chosen to authorize suits by animals, it did not do so.[22] Absent a law to the contrary, "[a]nimals are simply not capable of suing or being sued."[23]

Thus, even though the Copyright Act does not expressly state that an author must be human for a work to qualify for copyright protection, courts have held that this is so.[24] In Naruto v. Slater, a crested macaque discovered a wildlife photographer's camera and took several photographs of itself.[25] The photographer published the photos and was sued by PETA, on behalf of the monkey, for violating the monkey's alleged copyright.[26] The Ninth Circuit noted that the Copyright Act "does not expressly authorize animals to file copyright infringement suits" and explained that the Act's use of human family terms such as "children . . . legitimate or not, . . . widow, and widower, all imply humanity and necessarily exclude animals that do not marry and do not have heirs . . . ."[27] In Urantia Foundation v. Maaherra, the Ninth Circuit also refused to acknowledge copyright rights for a book allegedly "authored by celestial beings" and instead based its analysis on the humans who arranged it and wrote it down.[28]

The US Copyright Office interprets the term "author" in the Copyright Act to "exclude non-humans," including generative AI.[29] It requires that any work be the product of human authorship to be eligible for copyright protection.[30] With respect to generative AI, the Copyright Office weighs the specific facts of the creation and "will consider whether the AI contributions are the result of mechanical reproduction or instead . . . an author's own original mental conception . . . ."[31] The question appears to be whether the generative AI is responsible for the creative work or is merely being used as a tool by a creative human.[32] At one extreme, the office refused to grant a copyright in an image that was entirely "autonomously created by a computer algorithm," according to its author.[33] In a more recent case, the office took a more nuanced approach with a comic book written and arranged by a human, but where the art was entirely the creation of generative AI.[34] In this case, the office decided that the art could not be copyrighted, but the other creative elements could be since they were the product of a human.[35]

Patents, too, must be the invention of a natural person to warrant protection.[36] In Thaler v. Vidal, an individual claimed to have developed AI systems that generate patentable inventions and attempted to patent two outputs of his AI.[37] Despite prompts from the US Patent and Trademark Office to identify someone as the inventor, he insisted that the AI was the inventor.[38] His patent was denied because "a machine does not qualify as an inventor," and the Federal Circuit affirmed.[39] The court reasoned that the use of the word "individual" in the Patent Act ordinarily meant a human being and, absent an indication that Congress intended a different result, the meaning was plain.[40]

While works created wholly by AI are therefore unlikely to be protected, some level of human involvement can likely result in a copyrightable or patentable work. Exactly how much human involvement is needed is an open question, but the answer probably lies in how much and what kind of creative work is performed by humans after receiving a result from the software. The Copyright Office suggests that feeding a text prompt into the software is not enough.[41] It explains that "prompts function more like instructions to a commissioned artist—they identify what the prompter wishes to have depicted, but the machine determines how those instructions are implemented in its output."[42] Whether instructing a human or AI software...
