INCITEMENT AND SOCIAL MEDIA-ALGORITHMIC SPEECH: REDEFINING BRANDENBURG FOR A DIFFERENT KIND OF SPEECH.

Author: Rhoads, Anna

TABLE OF CONTENTS

INTRODUCTION
I. HISTORY AND BACKGROUND
   A. Free Speech Justifications and Values
   B. Brandenburg Standard for Incitement
II. THE PROBLEM
   A. Social Media-Algorithmic Speech Is Uniquely Likely to Produce Lawless Action
   B. Brandenburg Is Ill-Suited to Social Media-Algorithmic Speech
      1. The Brandenburg Standard Applied to Social Media-Algorithmic Speech
      2. First Amendment Theoretical Justifications Applied to Social Media-Algorithmic Speech
III. THE SOLUTION
   A. A New Incitement Standard for Social Media-Algorithmic Speech
   B. Counterarguments Regarding Social Media-Algorithmic Speech, Provision of Information, and § 230
      1. Social Media-Algorithmic Speech Is More Than Provision of Information
      2. Regulating Social Media-Algorithmic Speech Would Not Run Afoul of § 230
CONCLUSION

INTRODUCTION

As social media use has proliferated, (1) social media algorithms have become integral to our lives. (2) Social media companies design algorithms to increase user engagement, which increases advertisement exposure and, therefore, profit. (3)

How do social media algorithms increase engagement? Algorithms try to fill each user's feed with content of interest to the user. (4) To put "interesting and relatable" content on a user's page, the algorithm analyzes data generated by the user's interactions online. (5) The algorithm interprets these interactions as indicators of interest, and as such, it analyzes factors such as what content the user likes or shares, time spent with a given page or profile onscreen, profiles and pages searched, to whom the user directs messages, and whom the user knows in real life, (6) as well as location data and the user's friends' interests. (7) Based on this information, the algorithm creates a pool of content that it predicts might interest the user. (8) Then, the algorithm uses certain factors to rank how appealing each piece of content will be to the user. (9) Finally, the algorithm pushes the content that it predicts will most interest the user to the top of the user's feed. (10) Putting such content at the top of a user's feed "is expected to increase the chance a user will engage with the [content]." (11)
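The three-step pipeline described in the sources above (build a candidate pool from interaction signals, rank by predicted interest, surface the top results) can be sketched in a few lines of Python. This is a deliberately simplified toy model for exposition only: the topic-affinity scoring, data structures, and names are illustrative assumptions, not any platform's actual implementation.

```python
# Toy sketch of an engagement-ranking feed algorithm.
# All signal names and scoring rules are illustrative assumptions,
# not any real platform's implementation.

from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    topic: str

@dataclass
class UserProfile:
    # Interest scores per topic, derived from interactions such as
    # likes, shares, dwell time, and searches; normalized to [0, 1].
    topic_affinity: dict = field(default_factory=dict)

def build_candidate_pool(posts, user):
    """Step 1: keep only content on topics the user has shown interest in."""
    return [p for p in posts if user.topic_affinity.get(p.topic, 0.0) > 0.0]

def rank_feed(posts, user, feed_size=3):
    """Steps 2-3: score each candidate by predicted interest and
    push the highest-scoring content to the top of the feed."""
    pool = build_candidate_pool(posts, user)
    pool.sort(key=lambda p: user.topic_affinity[p.topic], reverse=True)
    return pool[:feed_size]

user = UserProfile(topic_affinity={"politics": 0.9, "sports": 0.2})
posts = [Post("a", "cooking"), Post("b", "politics"),
         Post("c", "sports"), Post("d", "politics")]

# High-affinity topics dominate the top of the feed, and topics with no
# measured interest never appear at all: the seed of a "filter bubble."
print([p.post_id for p in rank_feed(posts, user)])  # → ['b', 'd', 'c']
```

Even this toy version exhibits the dynamic the Note describes: content the user has never engaged with (here, "cooking") is filtered out entirely, so the feed reinforces existing interests rather than diversifying them.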

By predicting the kinds of content a user might like and placing only those kinds of content into a user's feed, social media algorithms create "filter bubbles." (12) A social media algorithm creates a filter bubble when it places only the same or similar kinds of information and content into a user's feed, creating an echo chamber. (13) These echo chambers are not innocuous. (14) Filter bubbles skew the information that a user receives such that the user learns only one side of a given story. (15) These bubbles persist across social media platforms both because algorithms share user information across platforms (16) and because some companies own multiple platforms. (17) Given the echo chambers these bubbles create and the persistence of information bubbles across platforms, algorithms contribute to modern political polarization. (18) Furthermore, algorithms actively encourage this polarization. (19)

Moreover, social media companies design algorithms to do more than simply provide information: they intentionally design these algorithms to persuade. (20) To turn a profit for themselves and those advertising on their sites, social media companies design algorithms with "the underlying motive of modifying a certain attitude or behavior, exploiting psychological and sociological theories, such as persuasion and social influence." (21) As such, social media algorithms can have real-world impacts on people's beliefs and actions. (22) This is where the problem arises.

As algorithms designed to modify beliefs and behavior funnel people into filter bubbles, social media sites become breeding grounds for violence. (23) Within these information bubbles, users begin to inaccurately believe that other people support violence. (24) Furthermore, being inundated with a specific idea across platforms leads users to believe that the idea is true or to give the idea more credence. (25) As such, when users repeatedly see content with violent messages, users begin to believe that many others support these violent, extreme ideas even though this belief is not aligned with reality. (26) When users believe that violent ideas are supported truths, users become more likely to engage in violence themselves. (27) Therefore, calls for violence and circulation of information about violent ideas on social media platforms present real risks and correlate with real-world violent outcomes. (28) Moreover, recent whistleblower testimony demonstrates that social media companies are aware that their algorithms amplify "dangerous speech that has led to violence and death," but companies have ignored or buried these findings, prioritizing engagement and profit over the very real risk of violence. (29)

This is an incitement problem. Speech causes tangible harm when it incites violence. (30) Inciting speech falls outside the First Amendment's protection and can, therefore, be punished civilly and/or criminally. (31) Currently, courts use the Brandenburg standard to determine whether speech qualifies as unprotected incitement. (32) To qualify as incitement, the Brandenburg standard requires that speech is "directed to inciting or producing imminent lawless action and is likely to incite or produce such action." (33) Three basic elements comprise this standard: (1) intent, (2) imminence, and (3) likelihood. (34)

Scholars have argued that algorithm-based decisions, like the ones that social media algorithms make about what content to put into people's feeds, qualify as speech. (35) Social media algorithms' decisions have a message of their own, beyond the message of any individual piece of content: the message of the filter bubble itself. (36) While a post might say "I hate lawyers," a social media algorithm that sends this post to a user will send countless similar pieces of content, culminating in a message from the algorithm itself that "lawyers are bad." As such, these scholars argue that the First Amendment applies or should apply to these algorithmic decisions. (37)

Assuming that these scholars are correct and that social media algorithms' decisions qualify as speech to which the First Amendment applies (social media-algorithmic speech), (38) this Note proposes a legal solution to the increasing problem of violence stemming from social media. This Note asserts that the incitement standard for social media-algorithmic speech should be less stringent because the Brandenburg standard does not apply well to new media, social media-algorithmic speech is much more likely than other speech to actually produce lawless action, and the traditional First Amendment justifications do not apply to social media algorithms' speech. Therefore, the Supreme Court should tweak the incitement standard for social media-algorithmic speech by altering Brandenburg's intent and imminence requirements.

Part I of this Note provides relevant history and background about the rationales behind and values of free speech and the current incitement standard. Part II presents the problem at hand, which is that social media-algorithmic speech is uniquely likely to produce lawless action while the Brandenburg standard does not and cannot address this problem sufficiently. Part III discusses a solution to this problem, arguing that the Court should modify the Brandenburg standard as applied to social media-algorithmic speech by altering the intent requirement and relaxing or removing the imminence requirement. Part III also addresses potential counterarguments.

I. HISTORY AND BACKGROUND

This conversation should start at the very beginning. Why is there a right to free speech enshrined in the First Amendment? What is the current standard for incitement and why? How are justifications for free speech and the Brandenburg standard interrelated? This Part discusses these preliminary questions to lay the groundwork for later discussion about how free speech justifications and the Brandenburg standard interact with and fail to address the problems of social media-algorithmic incitement and solutions to this issue.

A. Free Speech Justifications and Values

Free speech is somewhat unique in Supreme Court constitutional jurisprudence because in dealing with free speech, the Court generally does not look to history or the Framers' intent for interpretive guidance. (39) The Court does not look to history in part because it did not begin building a body of free speech jurisprudence until the early twentieth century. (40) Furthermore, the Court tends not to look to history or the Framers' intent because free speech history is sparse and conflicting and the Framers' intent is unclear. (41)

Initially, the free speech right appeared to be a response to England's licensing and seditious libel laws. (42) England required that people submit publications to royal officials for licensing before printing. (43) These licensing laws were a form of prior restraint on speech. (44) England also prosecuted seditious libel, which is intentional criticism of the government, government officials, or laws. (45) Against this backdrop, one might assume that the Framers enshrined the free speech right to eliminate prior restraints and seditious libel laws. (46) However, certain Framers passed the Alien and Sedition Acts of 1798, which prohibited seditious libel in the United States much like England's laws did, muddying the waters of the Framers' intent in establishing freedom of speech. (47) Aside from this conflicted history, there is "little indication of what the [F]ramers intended." (48)

Thus, the Court has instead turned to philosophical justifications for free speech as a fundamental right. (49) These justifications fall into a few categories: the search for truth, self-government, autonomy, and negative...
