Defamatory Political Deepfakes and the First Amendment

Jessica Ice

CONTENTS

INTRODUCTION
I. WHAT ARE DEEPFAKES?
   A. The Technology Behind Deepfakes
   B. How Deepfakes Got Their Name
   C. Defining Deepfakes
II. DEEPFAKES OF POLITICAL FIGURES
III. DEFAMATION LAW AS A REMEDY
IV. SATIRE OR PARODY AS A DEFENSE
V. INJUNCTIONS AS A REMEDY
   A. Injunctions and the First Amendment
   B. Constitutionally Permissible Injunctions on Expression
      1. Obscenity
      2. Copyright
   C. Injunctions on Defamatory Speech
VI. INJUNCTIONS ON DEFAMATORY DEEPFAKES
VII. APPROPRIATE SCOPE FOR INJUNCTIONS AGAINST POLITICAL DEFAMATORY DEEPFAKES
VIII. INJUNCTIONS AND THIRD-PARTY PROVIDERS
CONCLUSION

INTRODUCTION

After logging onto Facebook, you see that your friend has posted a video on your wall titled "Donald Trump Pee Tape Released." Surprised, you click on the link and are directed to a YouTube video of several young, scantily clad Russian women bouncing on a bed in what appears to be the infamous Presidential Suite at the Ritz-Carlton in Moscow. (1) Donald Trump walks into the room and heads toward the women. You hear Trump's voice say, "Let's have some fun, ladies." The women laugh, and you hear Trump again: "How about we show Obama what we think of him. Why don't you pee on his bed?" The tape abruptly cuts off, and you are left shocked. Could this video actually be real?

The video in the hypothetical above could easily be what is colloquially described as a "deepfake." The definition of a deepfake is "still in flux, as technology develops." (2) A deepfake is generally understood, however, to be a video made using machine learning to superimpose one real person's face onto another's in an existing video. This technology essentially makes it possible to ascribe the conduct of one individual, who has previously been videotaped, to a different individual. (3) That is, a deepfake is a digital impersonation of someone. This impersonation occurs without the consent of either the person in the original video or the person whose face is superimposed onto it. Individuals in the public eye have already been a major target of deepfakes. (4) Political figures will likely be the targets of future deepfakes, especially by those with an interest in sowing discord and undermining public trust. (5)

Deepfakes of political figures pose serious challenges for our political system and even national security, but legal remedies for these videos are complicated. Every legal remedy to combat the negative effects of political deepfakes must survive a careful balancing test. On one side lies a special interest in protecting high-value political speech and furthering public discourse under the First Amendment. On the other lies the potential for severe public harm through undermined elections and eroded trust in public officials. Although some political deepfakes might be satirical and promote public discourse about the merits of an individual candidate or issue, (6) many deepfakes will likely cross the line into pure defamation.

Given this careful balancing test, what remedies are available for defamatory political deepfakes that survive First Amendment scrutiny? If deepfakes are found to be truly defamatory, such speech would receive only limited First Amendment protection, and victims could recover monetary damages through successful defamation lawsuits. (7) For most political figures, however, damages will be difficult, if not impossible, to quantify, and any monetary award will come too late to truly remedy the reputational harm inflicted during a campaign or a tenure as a public figure. Thus, injunctions are likely a quicker and more effective remedy for defamatory political deepfakes.

Although injunctions against deepfakes may seem like a logical remedy, they will likely face major First Amendment hurdles. The Supreme Court has yet to provide a definitive answer on whether injunctions against defamatory speech are permissible under the First Amendment. (8) Some lower courts have found injunctions to be impermissible because they are not sufficiently tailored, effectively creating a prior restraint on constitutionally protected speech. (9) Other courts have suggested that narrowly crafted injunctions against defamatory speech may be permissible. (10) Even if an injunction against a defamatory political deepfake survives a First Amendment challenge, victims might still be unable to remove that deepfake if its creator is unreachable by United States courts. (11)

This Note argues that narrowly crafted injunctions against defamatory political deepfakes should be permitted under the First Amendment. First, this Note gives an overview of deepfakes and the technology used to create and propagate them. Second, it addresses the potential defamatory and non-defamatory uses of political deepfakes and how defamatory political deepfakes would fare under the heightened First Amendment scrutiny applied to political speech. Third, it surveys First Amendment jurisprudence on injunctions against speech and provides examples of permissible injunctions on expression. Fourth, borrowing from obscenity and copyright law, this Note discusses injunctions as a remedy for defamatory political deepfakes and whether such injunctions should be considered impermissible prior restraints on speech. Finally, it addresses the problem of unreachable defendants and provides a potential solution by extending to deepfakes the requirements that the Digital Millennium Copyright Act imposes for copyrighted materials. (12)

  I. WHAT ARE DEEPFAKES?

    The notion of what a deepfake is might seem intuitive at first glance: a deepfake appears to be simply a fake, face-swapped video. But such a definition grossly oversimplifies the technology behind deepfakes and lacks the specificity needed to address deepfakes properly from a legal perspective.

    A. The Technology Behind Deepfakes

      Traditionally, any individual who wanted to edit a photo or video had to upload it into a computer program and make any desired edits manually. Computer programs have gradually made the editing process easier, but a complete manual overhaul of a video with realistic final results remains very time- and resource-intensive. (13) For instance, when the creators of Rogue One: A Star Wars Story decided to bring back the character of Grand Moff Tarkin through a digital recreation, the film's visual-effects supervisor described the process as "extremely labor-intensive and expensive." (14)

      Deepfakes, however, do not require human labor to manipulate videos manually; instead, a computer's processing power does all the work. (15) The technology that makes deepfakes possible stems from "a branch of Machine Learning that focuses on deep neural networks" called "deep learning." (16) Deep learning loosely imitates the way the brain works by processing information through a series of nodes (similar to neurons) in an artificial neural network. To replicate an image, a neural network must take in a multitude of information from a particular source through an "input layer" and then run that information through successive nodes until it produces an "output layer." (17) Neural networks are "trained" by adjusting the weights at each node so that the final output layer comes as close as possible to the desired result. (18)
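      To make this concrete, the sketch below (written in Python with the NumPy library; every name and number in it is illustrative rather than drawn from any actual deepfake tool) shows a single layer of weighted nodes transforming an input layer into an output layer, with "training" implemented as repeated small adjustments to the weights:

          import numpy as np

          rng = np.random.default_rng(0)

          # A tiny "network": 4 input nodes fully connected to 2 output nodes.
          weights = rng.normal(size=(4, 2))

          def forward(input_layer):
              # Each output node sums its weighted inputs and applies a
              # nonlinearity (tanh), loosely mimicking a neuron firing.
              return np.tanh(input_layer @ weights)

          input_layer = np.array([0.2, -0.5, 0.1, 0.9])
          desired_output = np.array([1.0, 0.0])

          # "Training": repeatedly nudge each weight so that the output
          # layer moves closer to the desired result (gradient descent).
          for _ in range(1000):
              output = forward(input_layer)
              error = output - desired_output
              gradient = np.outer(input_layer, error * (1 - output ** 2))
              weights -= 0.1 * gradient

          print(forward(input_layer))  # now close to [1.0, 0.0]

      A real deepfake network stacks many such layers and learns millions of weights, but the training principle is the same.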

      Deepfakes add an extra layer of complexity to this process because they ultimately have two input sources: (1) the face in the original scenario video (the "original face"), and (2) the face swapped into the original scenario video (the "swapped face"). To facilitate this process, a computer must build two separate neural networks, one for each face, that have enough in common with each other to swap images on a shared facial structure. (19) A basic way to achieve this result is through an autoencoder. (20) An autoencoder is a "neural network that is trained to attempt to copy its input to its output." (21) For the face swapping to be successful, the two networks, one for the original face and another for the swapped face, must be trained separately. (22) Once the individual networks have been trained to sufficient accuracy, a portion of each network called the decoder can be swapped, effectively pasting the swapped face onto the network of the original face. This technique allows the swapped face to mimic any expressions made by the original face in the original video. (23)
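      A rough sketch of that shared-structure arrangement appears below, written in Python with the PyTorch library. The architecture, layer sizes, and names (encoder, decoder_a, decoder_b, and so on) are purely illustrative assumptions, not code from any actual deepfake program; the point is only to show two autoencoders sharing one encoder and then swapping decoders:

          import torch
          import torch.nn as nn

          # One shared encoder maps both faces onto a common internal
          # representation of facial structure and expression.
          encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 256), nn.ReLU())
          decoder_a = nn.Sequential(nn.Linear(256, 64 * 64), nn.Sigmoid())  # original face
          decoder_b = nn.Sequential(nn.Linear(256, 64 * 64), nn.Sigmoid())  # swapped face

          opt_a = torch.optim.Adam([*encoder.parameters(), *decoder_a.parameters()])
          opt_b = torch.optim.Adam([*encoder.parameters(), *decoder_b.parameters()])

          def train_step(images, decoder, opt):
              # The autoencoder objective: copy the input to the output.
              reconstruction = decoder(encoder(images))
              loss = nn.functional.mse_loss(reconstruction, images.flatten(1))
              opt.zero_grad()
              loss.backward()
              opt.step()

          faces_a = torch.rand(8, 64, 64)  # stand-in frames of the original face
          faces_b = torch.rand(8, 64, 64)  # stand-in frames of the swapped face
          for _ in range(100):
              train_step(faces_a, decoder_a, opt_a)  # train network A
              train_step(faces_b, decoder_b, opt_b)  # train network B

          # The swap: encode a frame of the original face but decode it with
          # the other person's decoder, so face B mimics face A's expression.
          swapped_frame = decoder_b(encoder(faces_a[:1]))

      Because both decoders learn to read the same encoder's output, whatever expression the encoder extracts from the original face can be rendered in the swapped face's likeness.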

      Face generation can be made even more realistic through the use of a Generative Adversarial Network ("GAN"). (25) Like autoencoders, GANs attempt to recreate images using deep-learning techniques. (26) GANs achieve this result by using two components: (1) a generator, which creates natural-looking images, and (2) a discriminator, which decides whether the images are real or fake. (27) Essentially, the "generator tries to fool the discriminator by generating real images as far as possible." (28) Through this adversarial process between the generator and the discriminator, the network is able to produce more consistently realistic images than it could through a traditional autoencoding structure.
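      The sketch below illustrates that adversarial loop in the same illustrative PyTorch style (again, all names and sizes are assumptions for exposition, not any production GAN): the generator learns to turn random noise into images, the discriminator learns to label images real or fake, and each improves by playing against the other:

          import torch
          import torch.nn as nn

          # Generator: turns random noise into a (flattened) fake image.
          generator = nn.Sequential(nn.Linear(16, 64 * 64), nn.Sigmoid())
          # Discriminator: scores an image as real (near 1) or fake (near 0).
          discriminator = nn.Sequential(nn.Linear(64 * 64, 1), nn.Sigmoid())

          opt_g = torch.optim.Adam(generator.parameters())
          opt_d = torch.optim.Adam(discriminator.parameters())
          bce = nn.BCELoss()

          real_images = torch.rand(8, 64 * 64)  # stand-in for real photographs

          for _ in range(100):
              fakes = generator(torch.randn(8, 16))

              # Discriminator step: learn to call real images 1 and fakes 0.
              d_loss = (bce(discriminator(real_images), torch.ones(8, 1))
                        + bce(discriminator(fakes.detach()), torch.zeros(8, 1)))
              opt_d.zero_grad()
              d_loss.backward()
              opt_d.step()

              # Generator step: try to fool the discriminator into saying 1.
              g_loss = bce(discriminator(fakes), torch.ones(8, 1))
              opt_g.zero_grad()
              g_loss.backward()
              opt_g.step()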

      Most recently, mathematicians and computer scientists have attempted to combine GANs with a specialized type of autoencoding called "variational autoencoding" (collectively, "VAE-GANs") to produce the most realistic output layers or generated images to date. (29) VAE-GANs work by using the autoencoding process to provide an image to the GAN's generator. (30) The GAN's discriminator then checks the image through an iterative process to make it seem more realistic. (31) In addition to the advancements in deepfake videos, researchers have also made strides in improving fake audio through GANs (32) and other techniques. (33) These advancements are only the beginning of computer-image and audio regeneration...
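      Returning to the VAE-GAN pipeline described above, a compressed sketch follows, again in illustrative PyTorch (the layer sizes, names, and single combined loss are simplifying assumptions, not the precise formulations in the cited research): a variational encoder samples a latent code, a decoder reconstructs the image from it, and a GAN-style discriminator then scores the reconstruction's realism, pushing the decoder toward sharper output:

          import torch
          import torch.nn as nn

          enc = nn.Linear(64 * 64, 2 * 32)   # outputs a mean and a log-variance
          dec = nn.Linear(32, 64 * 64)       # decoder doubles as the generator
          disc = nn.Sequential(nn.Linear(64 * 64, 1), nn.Sigmoid())

          image = torch.rand(1, 64 * 64)     # stand-in input frame

          # Variational step: sample a latent code from the predicted
          # distribution (the "reparameterization trick").
          mean, log_var = enc(image).chunk(2, dim=-1)
          z = mean + torch.randn_like(mean) * torch.exp(0.5 * log_var)
          reconstruction = torch.sigmoid(dec(z))

          # VAE losses: reconstruct the input and keep the latent codes
          # close to a standard normal distribution.
          recon_loss = nn.functional.mse_loss(reconstruction, image)
          kl_loss = -0.5 * torch.mean(1 + log_var - mean ** 2 - log_var.exp())

          # GAN loss: the discriminator judges the reconstruction's realism.
          adv_loss = nn.functional.binary_cross_entropy(
              disc(reconstruction), torch.ones(1, 1))

          # In training, this combined objective is minimized iteratively.
          total_loss = recon_loss + kl_loss + adv_loss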
