EXTREMIST SPEECH, COMPELLED CONFORMITY, AND CENSORSHIP CREEP.

Author: Citron, Danielle Keats
 

INTRODUCTION

In 2008, U.S. Senator Joseph Lieberman squared off with internet companies and lost. The dispute concerned the Senator's demand that platforms remove hundreds of Al-Qaeda training videos. (1) Senator Lieberman argued that by keeping up the videos, tech companies were complicit in terrorist recruitment. (2)

Google's YouTube held fast in defense of users' right to express unpopular viewpoints. (3) As Jeffrey Rosen wrote at the time, Google's Nicole Wong and her colleagues worked "impressively to put the company's long-term commitment to free expression above its short-term financial interests." (4) Ignoring the Senator's demands was a safe strategy: any effort to proscribe extremist expression would likely fail given the First Amendment's hostility to viewpoint-based regulations. (5)

American free speech values guided policy decisions in Silicon Valley long after the showdown with Senator Lieberman. (6) Social media companies routinely looked to First Amendment doctrine in crafting speech policies. (7) Twitter, an exemplar of this ethos, was aptly known as "the free speech wing of the free speech party." (8)

From the start, tech companies' commitment to free expression admitted some exceptions. (9) Terms of service and community guidelines banned child pornography, spam, phishing, fraud, impersonation, and copyright violations. (10) Threats, cyber stalking, nonconsensual pornography, and hate speech were prohibited after extended discussions with advocacy groups. (11) The goal was to strike an appropriate balance between free expression and abuse prevention while preserving platforms' market share. (12)

More recently, social media companies have revised their speech policies concerning extremist and hateful expression. Unlike previous changes, however, these revisions were not the result of market forces. They were not made to accommodate the wishes of advertisers and advocates. (13) Instead, they were adopted to stave off threatened European regulation. After terrorist attacks in Paris and Brussels in late 2015, European regulators excoriated tech companies for failing to combat terrorist recruitment on their platforms. (14) Their message was clear: online platforms would face onerous civil and criminal penalties unless their policies and processes resulted in the rapid removal of extremist speech. (15)

Tech companies accommodated these demands because regulation of extremist speech was a real possibility. Unlike in the United States, there is no heavy presumption against speech restrictions in the European Union. (16) On May 31, 2016, Facebook, Microsoft, Twitter, and YouTube entered into an agreement with the European Commission to remove "hateful" speech within twenty-four hours if appropriate under terms of service. (17) Six months later, the same companies announced plans for a shared database of banned extremist content for review and removal elsewhere. (18)

Nearly a decade later, European lawmakers accomplished what Senator Lieberman could not. (19) By insisting upon changes to platforms' speech rules and practices, EU regulators have exerted their will across the globe. Unlike national laws that apply only within a country's borders, terms of service apply wherever platforms are accessed. (20) Similarly, whereas local courts can only order platforms to block material accessed in their jurisdiction, the industry database has the potential to result in worldwide censorship.

All of this might enjoy some justification if EU regulators focused their efforts on speech proscribed in their countries. (21) But this has not been the case. Calls to remove hate speech have quickly ballooned to cover expression that does not violate existing European law, including "online radicalization" and "fake news." (22) EU officials have pressed a view of hate speech that can be extended to political dissent and newsworthy developments. At risk is censorship creep on a global scale.

Scholarship has explored how formal legal requirements and informal government pressure can result in collateral censorship, the silencing of private actors by other private actors. (23) Free speech scholar and Yale Information Society Project founder Jack Balkin recently warned:

Currently the Internet is mostly governed by the values of the least censorious regime--that of the United States. If nation states can enforce global filtering, blocking, and delinking, the Internet will eventually be governed by the most censorious regime. This will undermine the global public good of a free Internet. (24)

The assault on the "global public good of a free Internet" is already underway. As this Article shows, digital expression is conforming to EU speech norms with extremist and hateful speech as catalysts.

This Article has three parts. Part I exposes the pressure facing technology companies to tailor their speech policies to EU norms. As Part I shows, Silicon Valley's recent retreat from a strong commitment to free speech has more to do with compulsion than choice. Part II explores the fallout, highlighting the risk of censorship creep on a global scale. Part III offers safeguards designed to contain extralegal pressure for the good of free expression.

  I. THE EU'S POWER OVER PRIVATE SPEECH RULES

    After a spate of deadly terror attacks and hate crimes in 2015, European lawmakers told social media companies that they were partly to blame for the violence. (25) In their view, online platforms had enabled violent extremists by giving them access to potential recruits. (26) European lawmakers warned companies that they would face onerous criminal and civil penalties unless online extremism was eliminated. (27)

    After the Charlie Hebdo attack, French President François Hollande called for legislation that would make social media platforms criminally liable for users' "extremist" content. (28) French Interior Minister Bernard Cazeneuve followed that warning with meetings in Silicon Valley. (29) Discussions with tech executives bore some fruit: several companies agreed to continue removing terror-related content. (30) As this Part explores, this was just the start of Silicon Valley's concessions to European regulators.

    A. Code of Conduct

      On December 3, 2015, the European Commission (31) established the European Internet Forum (the "Forum"). (32) The goal was the development of "a joint, voluntary approach" for the detection and removal of "online terrorist incitement and hate speech." (33) Participants included European officials, Europol, and tech companies Facebook, Microsoft, Twitter, and YouTube ("Tech Companies"). The European Commissioner for Migration, Home Affairs, and Citizenship remarked:

      Terrorists are abusing the internet to spread their poisonous propaganda: that needs to stop. The voluntary partnership we launch today with the internet industry [aims] to address this problem. We want swift results. This is a new way to tackle this extremist abuse of the internet, and it will provide the platform for expert knowledge to be shared, [and] for quick and operational conclusions to be developed .... (34)

      The Forum produced results in short order. On May 31, 2016, the European Commission announced an agreement with the Tech Companies entitled "Code of Conduct on Countering Illegal Hate Speech Online" ("hate-speech agreement" or "the Code"). (35) The Tech Companies agreed to prohibit "hateful conduct," defined as speech inciting violence or hatred against protected groups. (36) Reports of hate speech would be reviewed within twenty-four hours and removed if the speech violated companies' terms of service. (37) The European Commission made clear that it would conduct periodic reviews of the Tech Companies' compliance with the hate-speech agreement. (38) The European Commissioner for Justice, Consumers and Gender Equality, Věra Jourová, hailed the hate-speech agreement as essential to combating the use of social media to "radicalize young people and to spread violence and hatred." (39)

      In December 2016, the European Commission issued its first assessment of the Tech Companies' handling of hate-speech reports, and the feedback was not positive. Over a six-week period, twelve organizations, working on behalf of the Commission, reported alleged incidents of hate speech and tracked the companies' response. (40) The European Commission criticized the Tech Companies' "success rate"--the number of requests resulting in removal--and timeliness. (41) Only forty percent of hate-speech reports were reviewed in twenty-four hours, and twenty-eight percent of speech reported as hateful conduct was removed. (42)

      In the estimation of the European Commission, the Tech Companies had fallen short on their commitments. Jourová warned that if the Tech Companies "want to convince me and the ministers that the non-legislative approach can work, they will have to act quickly and make a strong effort in the coming months." (43) In other words, more hateful speech needed to be removed, and faster, or else. (44)

    B. Blacklist Database

      EU authorities have been in contact with social media companies about terrorist groups' use of their services for the past seven years. (45) For years, the contact proceeded on an ad hoc basis with law enforcement asking companies to take down content. (46) The United Kingdom established a Counter Terrorism Internet Referral Unit (CTIRU) to identify and report "violent and nonviolent extremism" to online platforms. (47) From 2010 to 2015, CTIRU secured the removal of 249,091 pieces of terrorist-related content. (48) According to UK officials, CTIRU had no need to file formal notice and takedown requests because tech companies were so cooperative. (49)

      Given the success of the CTIRU's efforts, Europol established its Internet Referral Unit, described as a "partnership [ ] with the private sector (to promote 'self-regulation' by online service providers)." (50) Ninety-one percent of the content reported has been removed. (51)...
