The First Amendment Implications of Regulating Political Deepfakes

Author: Matthew Bodi
  1. INTRODUCTION

    The phrase "seeing is believing" may soon be an anachronism. Videos and photographs have been edited and manipulated for practically as long as they have existed. (1) However, it will become increasingly difficult to tell whether a video or an audio recording is authentic. Deepfake is a recent technological development used to create videos or audio clips purporting to feature an individual doing something they never did or saying something they never said. (2) Deepfake content is created in such a sophisticated way that, to the naked eye, it may look completely real. Most concerning is that even though experts are employing "deepfake detectors," which are arguably as sophisticated as the technology they are designed to detect, the detectors are unable to determine with one-hundred percent accuracy whether a video is authentic. (4) This Note examines how the government, at both the federal and state level, has responded to the real and perceived threats posed by deepfake technology, and how those responses target malicious uses of deepfakes.

    The purpose of this Note is to consider, in particular, the First Amendment implications of laws recently passed by Texas and California addressing political deepfakes. Lawmakers are correct to want to protect the American people from maliciously deceptive misinformation, especially when it is used to interfere with elections. But any future law aimed at protecting the electorate must be carefully considered and drafted to ensure it does not lead to unconstitutional censorship of political speech.

    Part II provides background on the technology; the key takeaway is the rapid development of this technology, as well as the ubiquity and undetectability that leading experts predict. Part III examines proposed federal legislation and then describes recently enacted state laws. Part IV considers the constitutional implications of California's and Texas' laws for the restrictions they place on political deepfakes. That Part also contemplates the constitutional tension between the First Amendment's protection of political speech and the potential lack of protection afforded to purposefully false statements. The Conclusion suggests that Texas' law may be found unconstitutional while California's likely will not. This Note points out the need for legislation that addresses this unprecedented technology and highlights the constitutional principles that protect American citizens' political speech. Any prospective law must be carefully written so as not to fail the strict scrutiny inquiry to which it will likely be subjected.

  2. A PRIMER ON DEEPFAKE TECHNOLOGY

    1. What Is Deepfake and What Does the Future Hold?

      Deepfake is much more than a simple photo-editing program. Deepfake uses artificial intelligence and machine learning to create edited content. It utilizes Generative Adversarial Networks (GANs) to generate the most realistic image possible. (6) GANs are "deep neural net architectures comprised of two [neural networks], pitting one against the other (thus the 'adversarial')." One of the neural networks generates a synthetic image, then passes it to the other neural network, which evaluates the authenticity of that image. (8) The generating neural network will continue to generate images, optimizing itself as it does, until the evaluating neural network deems its generated image authentic. (9)

      The two networks "challenge each other with increasingly realistic fakes, both optimizing their strategies until their generated data is indistinguishable from the real data." (10) The generating network trains on a data set before attempting to create its image, and the larger the data set on which it trains, the easier it is to create a believable deepfake. (11) It is therefore easier to create believable deepfakes of public figures and celebrities because their likenesses and voices have been captured extensively in video footage and photographs. (12)
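      The generator-versus-discriminator dynamic described above can be sketched in miniature. The toy program below is purely illustrative (it is not any system discussed in this Note, and every name and parameter in it is invented for the example): a one-parameter "forger" learns to mimic a simple one-dimensional data distribution by repeatedly trying to fool a logistic-regression "evaluator," which is simultaneously trained to tell real samples from forgeries.

```python
import math
import random

random.seed(0)

REAL_MEAN, REAL_SD = 4.0, 0.5   # the "real" data distribution the forger imitates

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Generator ("forger"): turns random noise z into g_w * z + g_b.
g_w, g_b = 1.0, 0.0
# Discriminator ("evaluator"): logistic regression sigmoid(d_w * x + d_b).
d_w, d_b = 0.0, 0.0

lr, batch = 0.05, 64
for step in range(2000):
    zs = [random.uniform(-1.0, 1.0) for _ in range(batch)]
    fakes = [g_w * z + g_b for z in zs]
    reals = [random.gauss(REAL_MEAN, REAL_SD) for _ in range(batch)]

    # Discriminator step: push its output toward 1 on real samples, 0 on fakes.
    for xs, label in ((reals, 1.0), (fakes, 0.0)):
        grads = [sigmoid(d_w * x + d_b) - label for x in xs]  # dLoss/dlogit
        d_w -= lr * sum(g * x for g, x in zip(grads, xs)) / batch
        d_b -= lr * sum(grads) / batch

    # Generator step: adjust the forgery so the discriminator scores it as real.
    fakes = [g_w * z + g_b for z in zs]
    grads = [(sigmoid(d_w * f + d_b) - 1.0) * d_w for f in fakes]  # chain rule
    g_w -= lr * sum(g * z for g, z in zip(grads, zs)) / batch
    g_b -= lr * sum(grads) / batch

# After training, the forger's average output should sit near REAL_MEAN.
gen_mean = sum(g_w * random.uniform(-1.0, 1.0) + g_b for _ in range(1000)) / 1000
```

      Real GAN systems replace these one-parameter players with deep neural networks trained on images rather than numbers, but the adversarial loop is the same: the evaluator's judgments are the very signal the forger uses to improve, which is why the generated output converges toward the real data.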

      These innovations are fairly recent. GANs were first introduced in 2014 by researchers from the University of Montreal. (13) More recently, GANs have been used to create deepfake content. The first deepfake videos were reported only around December 2017. (14)

      The term "deepfake" comes from that December 2017 reporting. "Deepfakes" was a Reddit user who attracted attention for using the technology to superimpose celebrities' faces onto pornographic video clips. (15) The username was "simply a portmanteau of 'deep learning' (the particular flavor of AI used for the task) and 'fakes[.]'" (16) Since then, the term "deepfake" has been used to describe a variety of other methods of "audiovisual manipulation" utilizing deep learning and GANs to create videos and audio recordings. (17)

      The concerns this technology raises have less to do with the fact that such high-quality fake videos can be created than with the rapidly increasing accessibility of the technology. The term "deepfakes" took on a life of its own not because this Reddit user's videos appeared so realistic. Rather, the content was significant and concerning for two additional reasons: first, because the videos were created by a single person using "open-source machine learning tools like TensorFlow, which Google makes freely available"; and second, because these high-quality faked videos could be achieved in a few hours with only a "consumer-grade graphics card." (18)

      While access to a variety of deepfake systems is increasing, those systems are simultaneously becoming more user-friendly. This brings the prospect of widespread and ubiquitous deepfake creation closer to reality. One extremely popular and inoffensive use of this technology is "Zao," a Chinese face-swap smartphone app that utilizes deepfake technology to swap users' faces "into popular scenes from hundreds of movies or TV shows." (19) However, its popularity waned after concerns regarding the storage of user data came to consumers' attention. (20)

      Another face-swap deepfake system, one that offers users more creative freedom, is FSGAN. This system allows a user to take a video of one person and superimpose another person's face onto the subject, requiring no more than a photograph of the face to be superimposed. (21) Although its output is of lower quality than that of systems trained on larger data sets, FSGAN makes deepfakes easier to create and more accessible. (22) The rapid development and spread of this technology since 2017 indicates that there will be further speedy development and wider access in the near future.

      As mentioned above, deepfake audio as well as video may be created. Thus, with more sophisticated software, a creator can fabricate not only a video of someone doing something but also dialogue for the fabricated scene. (23) Currently, this technology still requires extensive input data, can only be placed onto a non-moving target, and cannot alter the inflection of the target's voice, though as the technology develops it will inevitably improve in these areas.

      A "deepfake pioneer" and associate professor at the University of Southern California (USC), Dr. Hao Li, (25) recently predicted in a CNBC interview that deepfakes that "appear 'perfectly real' will be accessible to everyday people in 'half-a-year to a year[.]'" (26) This is a shorter time frame than his initial prediction of two to three years. (27) Regardless of whether it is a year or three years away, the prospect of widely accessible, perfectly real deepfakes is looming on the horizon.

      This prospect has led experts to race to create artificial-intelligence-driven tools to detect deepfakes, an attempt to fight fire with fire. The University at Albany, Purdue University, UC Riverside, and UC Santa Barbara all have programs developing these "Media Forensic" tools. As a testament to the widespread concern over this technology, these programs are supported by the U.S. Defense Advanced Research Projects Agency (DARPA). (28) However, concerns exist that these tools will never be enough and "may simply signal the beginning of an AI-powered arms race between video forgers and digital sleuths. A key problem... is that machine-learning systems can be trained to outmaneuver forensics tools." (29) These measures have been characterized by some as merely a stopgap. Dr. Hao Li has helped develop one of these detectors and has nonetheless concluded, "it won't be long until the work is useless... deepfake technology is developing with a virus/anti-virus dynamic." (30)

    2. Current and Potential Future Uses of Deepfakes

      Between 2018 and 2019, the number of deepfake videos on the internet doubled. (31) This fact is troubling when one considers how this video creation technology has been employed. It takes little imagination to see a bad actor using this technology to fabricate footage of an individual engaging in an affair or an act of violence, or making an offensive remark or gesture, any of which could cost that individual their marriage, family, job, or reputation, or open them up to extortion. Fears rightly exist about the technology's potential use in national security and political deceptions.

      As of September 2019, the overwhelming majority of deepfake videos on the internet, up to ninety-six percent, were pornographic. (32) DeepNude is an extremely troubling example of this technology, one that completely disregards a target's sense of personal privacy. The licensed version sold for fifty dollars and allowed a user to "strip" the clothes off a photo, superimposing naked female anatomy onto the targeted individual and generating an image of that individual appearing to be naked. (33)

      After an article was published criticizing DeepNude's product, the site was overwhelmed by so many download requests from interested purchasers that it shut down. (34) The...
