Do Androids Defame with Actual Malice? Libel in the World of Automated Journalism.

Author: Dallin Albright

TABLE OF CONTENTS

I. INTRODUCTION
II. BACKGROUND
   A. Algorithmic Speech
      1. Curated Production
      2. Semi-Autonomous Production
      3. Fully Autonomous Production
      4. Artificial Intelligence and Misinformation
   B. Libel and Defamation
      1. The Negligence Standard
      2. The Actual Malice Standard
      3. Who Can Be Liable?
III. ANALYSIS
   A. Applying the Negligence Standard
   B. Libel Defendants in Cases Involving Artificial Intelligence
   C. Concerns with the Negligence Standard
      1. Freedom of Speech
      2. Channels of Effective Communication
IV. CONCLUSION

I. INTRODUCTION

Automation has been a disruptive influence in many professions, and now even journalists are feeling its effects. Automated journalism is the use of artificial intelligence (AI), or algorithmic computer programs, to produce news articles. (1) News outlets such as The Washington Post, The Associated Press, and The New York Times have used it effectively to report sports scores, financial news, and the weather. (2) In September 2020, The Guardian published a long-form article produced by OpenAI's GPT-3 language generator, demonstrating the potential of automated journalism. (3) In 2020, Microsoft announced that it would not renew the contracts of roughly fifty of its news production contractors and planned to replace them with AI. (4) Over the next several years, AI is expected to transform the news industry, presenting novel legal challenges to those practicing communications law. (5)

Automated journalism creates a unique risk to news publishers with respect to the possible production of defamatory or libelous statements. (6) Courts in the past have created standards dependent on an author-defendant's malice or their understanding that a defamatory statement is false or hurtful. (7) However, traditional methods cannot show that an algorithm possessed malice or that a machine produced a statement knowing it was false or hurtful. (8) And yet, AI-generated defamation is still harmful to the individuals about whom it is written and to the general public consuming the false information. (9) Some argue that statements produced by an algorithm are owed the same protections afforded to the statements made by living individuals. (10) Others believe that as non-human actors, algorithms do not warrant the same level of protection as human speakers. (11)

This Note argues that the actual malice standard for defamation should not apply to statements produced by AI, even when those statements discuss public officials or public figures. Rather, defamation claims over AI-generated statements should be evaluated under the more appropriate negligence standard, which is ordinarily applied to statements about private individuals. Under the negligence standard, defendants would owe a reasonable duty of care to follow sound journalistic practices and attempt to verify the truthfulness of statements generated by AI. This is more appropriate than the actual malice standard, under which a defendant escapes liability so long as it neither entertained serious doubts about a statement's truthfulness nor acted with reckless indifference in publishing it.

This Note will first review the nature and development of algorithmic speech before analyzing how the negligence standard could be applied to cases involving AI. The Background section will review how algorithms create statements through mechanical patterns with varying degrees of human input, and how this process can sometimes lead to unpredictable results. That section will also review the elements of libel law, highlighting the heightened protection given to defendants who make statements about public officials and public figures out of constitutional concern for freedom of speech. The Analysis section will then examine the reasoning behind imposing a stricter duty on defendants that use AI, given its unique power to spread disinformation if left unchecked. This Note will then address the objections that free speech advocates may raise against removing the actual malice requirement by analyzing the difference between algorithmic speakers and human speakers. AI poses a unique challenge to legal and journalistic institutions, and only by adapting quickly can courts keep pace with rapidly developing technology.

II. BACKGROUND

To understand the reasons for removing the actual malice requirement for libel when speech is produced by AI, it is necessary to understand the basic nature of artificial intelligence and the legal framework surrounding defamation. Autonomous journalism currently requires significant human input, but as the technology becomes more sophisticated, it will require less and less independent human judgment to create and share statements. (12) This can lead to false, inappropriate, or misleading statements being shared with the public if not properly reviewed or controlled. (13) The elements of libel against public figures require that, in addition to being untrue, a defamatory statement be shared with actual malice or reckless disregard for the truth. (14) This could create a difficult barrier for those damaged by autonomously generated libel to overcome, because algorithms cannot be shown to possess actual malice or reckless disregard for the truth in the way human authors can.

A. Algorithmic Speech

Statements produced by AI are commonly called "algorithmic speech" and can be classified in several broad categories based on the level of user input required to produce statements. (15) This Note will adopt the categories of Curated Production, Semi-Autonomous Production, and Fully Autonomous Production. (16) Before addressing the legal challenges presented by speech produced by AI, it is essential to define and describe these categories of speech.

1. Curated Production

Curated production is a form of algorithmic speech in which computer programs are fed data internally by users to produce text. (17) This level of AI has the least freedom to generate unexpected statements and affords the greatest degree of user control. (18) Programs like these are fed information to produce text that is formulaic and predictable. (19)

Most current autonomously-generated news stories would be categorized as Curated Production. (20) News companies feed a program data from sports matches, weather forecasts, or the financial markets, and the program produces simple stories that resemble those written by a human. (21) Since these news stories are mostly "by-the-numbers" with little to no commentary or analysis, they are ideal for autonomous journalism, and many news publishers have adopted the technology specifically to cover these fields. (22)
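The mechanics of Curated Production can be illustrated with a minimal sketch: structured data goes in, and a fixed template produces formulaic prose. The template, data, and function names below are purely illustrative assumptions for demonstration; commercial systems use far richer template libraries, but the underlying mechanism is the same.

```python
# Minimal sketch of curated (template-driven) news generation.
# All names and data here are hypothetical illustrations, not any
# vendor's actual system.

def game_recap(game: dict) -> str:
    """Fill a fixed template with structured box-score data."""
    margin = abs(game["home_score"] - game["away_score"])
    if game["home_score"] > game["away_score"]:
        winner, loser = game["home"], game["away"]
    else:
        winner, loser = game["away"], game["home"]
    # The "judgment" is just a threshold rule baked in by the programmer.
    verb = "edged" if margin <= 3 else "defeated"
    high = max(game["home_score"], game["away_score"])
    low = min(game["home_score"], game["away_score"])
    return f"{winner} {verb} {loser} {high}-{low} on {game['date']}."

print(game_recap({
    "home": "Springfield", "away": "Shelbyville",
    "home_score": 21, "away_score": 14, "date": "Saturday",
}))
# "Springfield defeated Shelbyville 21-14 on Saturday."
```

Because every sentence the program can emit is fixed in advance by the template, the operator retains near-total control over the output, which is why this category poses the smallest defamation risk.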

2. Semi-Autonomous Production

When algorithms are designed to respond to data from external sources, they qualify as Semi-Autonomous. (23) These programs behave with a greater degree of freedom to produce statements that are not immediately intended by the programmer. (24) This can result in text that appears more natural and "human," which can be a desirable trait when interacting with external information. (25) This level of sophistication could also require less internal input and oversight, saving an operator's time and resources. (26)

One (in)famous example of Semi-Autonomous Production is Microsoft's AI chatbot, "Tay," for which Microsoft created an account on Twitter in 2016. (27) The program was designed to learn from external sources by interacting with other users on the platform, allowing it to appear more human. (28) Unfortunately, within a day of its debut, Tay's Twitter account began posting inflammatory and inappropriate statements based upon its interactions with other Twitter users. (29) The chatbot was quickly taken down by an embarrassed Microsoft, but the episode provides a significant warning about the dangers of allowing AI to generate and publish statements without oversight. (30)

A more familiar, everyday example of Semi-Autonomous Production is the autocomplete function available in search engines and word processors. (31) These functions are designed to respond to external user input and predict the next several words a user would like to type. (32) Like Tay, these programs take user input and extrapolate new statements with varying results: sometimes the statements produced by autocomplete are acceptable, and other times they can be problematic. (33)
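The prediction step behind such features can be sketched with a toy bigram model: count which word most often follows the current one in some body of prior text, then suggest that word. This is a deliberately simplified illustration, not how any production autocomplete actually works, but it shows the key property the text describes: the program extrapolates from whatever external text it has absorbed, for better or worse.

```python
# Toy bigram "autocomplete": suggest the word that most often follows
# the current word in the training text. A hypothetical sketch only;
# real systems are far more sophisticated, but share this core
# "extrapolate from prior text" behavior.

from collections import Counter, defaultdict


def train_bigrams(corpus: str) -> dict:
    """Count, for each word, which words follow it and how often."""
    follows = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows


def suggest(follows: dict, word: str):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = follows.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None


model = train_bigrams(
    "the court held that the statement was false and the court awarded damages"
)
print(suggest(model, "the"))  # "court" -- its most frequent follower
```

The suggestion is purely statistical: the model has no notion of whether a completion is true or appropriate, which is precisely why output quality tracks the quality of the text the system learned from.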

There are few examples of Semi-Autonomous news stories that have made it to print. Two articles--one published in The Guardian in 2020 and one in The New York Times in 2021--were written using artificial intelligence to discuss artificial intelligence. (34) However, both articles required a good deal of editorial control over the algorithm to generate text suitable for print. (35) One editor noted that producing the article required generating eight different iterations and splicing them together, (36) while another observed that the algorithm took several tries because it kept getting stuck in an iterative loop. (37) If the goal of autonomous journalism is to require less user input while still generating natural-seeming statements, Semi-Autonomous Production may still have a long way to go.

So far, the question of liability for Semi-Autonomous Production has been averted through the application of Section 230 of the Communications Decency Act. (38) That Section provides in part that "[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." (39) In other words, websites and content platforms cannot be held liable for information shared by third-party users. This matters because Semi-Autonomous Production is used most frequently by search engines and social media platforms. (40) These parties can argue that algorithmic statements occur because of third-party posts or links, meaning they cannot be held...
