WHO MODERATES THE MODERATORS? A LAW & ECONOMICS APPROACH TO HOLDING ONLINE PLATFORMS ACCOUNTABLE WITHOUT DESTROYING THE INTERNET

Geoffrey A. Manne
  1. INTRODUCTION / OVERVIEW

    In the quarter-century since Section 230 was enacted as part of the Communications Decency Act of 1996, a growing number of lawmakers have sought to reform the law. (1) In the 116th Congress alone, 26 bills were introduced to modify its scope or to repeal it altogether. (2) Indeed, we have learned much in the last 25 years about where Section 230 has worked well and where it has not.

    Section 230 contains two major provisions: (1) that an online service provider will not be treated as the publisher or speaker of third-party content, and (2) that online service providers will not be liable for actions taken to moderate third-party content hosted by their services. (3) In essence, Section 230 has come to be seen as a broad immunity provision insulating online platforms from liability for virtually all harms caused by user-generated content hosted by their services, including when platforms might otherwise be deemed to be implicated because of the exercise of editorial control over that content.

    To the extent that the current legal regime permits social harms online that exceed their concomitant benefits, it should be reformed to deter those harms, provided this can be done at sufficiently low cost. The salient objection to Section 230 reform is not one of principle, but of practicality: are there effective reforms that would address the identified harms without destroying (or excessively damaging) the vibrant Internet ecosystem by imposing punishing, open-ended legal liability?

    The debate over Section 230 reform is often framed as a binary choice: maintain the statute as it is or repeal it entirely. (4) But those are not, in fact, the only options. Various reform proposals each offer pieces of a useful approach, but few propose a holistic path forward. To better mitigate truly harmful conduct on Internet platforms, we believe, first, that Section 230 immunity should be conditioned on a duty-of-care standard that obliges service providers to reasonably protect their users and others from the foreseeable illegal or tortious acts of third parties. But this alone would be deficient: adding an open-ended duty of care to the current legal system could generate a volume of litigation that few, if any, platform providers could survive. Thus, second, we believe that any new duty of care would need to be tempered by procedural reforms designed to ensure that only meritorious litigation survives beyond a motion to dismiss.

    The crucial question is whether any proposed reform could pass a cost-benefit test--that is, whether it is likely to meaningfully reduce the incidence of unlawful or tortious online content while sufficiently addressing the objections to the modification of Section 230 immunity, such that its benefits outweigh its costs. (5) "The general problem remains one of selecting the mix of direct and collateral enforcement measures that minimizes the total costs of misconduct and enforcement." (6) There is no reason to think this is impossible. While many objections to Section 230 reform are well-founded, they also frequently suffer from overstatement or from insufficiently supported suppositions about the magnitude of harm. (7) At the same time, some of the expressed concerns are either simply misplaced or serve instead as arguments for broader civil-procedure reform (or decriminalization), rather than as defenses of the particularized immunity afforded by Section 230 itself. (8)

    In what follows, we offer our analysis of these objections, as well as some proposals to reform Section 230 that, we believe, appropriately address the stated concerns and suggest a viable path forward. (9) These proposals are a working draft of what we believe may be the best way forward, but, more importantly, they reflect how this paper's framework for assessing online intermediary liability can offer new insights and potential solutions to seemingly intractable problems. Many may challenge how well our suggestions navigate the relevant tradeoffs, but the overarching point of this exercise is to demonstrate how we should be negotiating the tradeoffs embedded in Section 230 and its reform.

    Of central importance to the approach taken in this paper, our proposals presuppose a condition frequently elided by defenders of the Section 230 status quo, although we believe nearly all of them would agree with the assertion: that there is actual harm--violations of civil law and civil rights, violations of criminal law, and tortious conduct--that occurs on online platforms and that imposes real costs on individuals and society at large. (10) Our proposal proceeds on the assumption, in other words, that there are very real, concrete benefits that would result from demanding greater accountability from online intermediaries, even if that also leads to "collateral censorship" of some lawful speech. (11)

    We use the word "censorship" intentionally. The clearest (but not the only) tradeoff in requiring online intermediaries to police more content is the loss of speech that may accompany it. (12) We also use the term for another reason: as suggested below, most defenders of the Section 230 status quo who fail to meaningfully address the potential benefits of more stringent restrictions on unlawful third-party content believe the costs of infringing free speech to be so high that they cannot possibly be justified by corresponding benefits. They are, in other words, free-speech absolutists. This is not our position, though we are staunch defenders of free-speech rights and count the prospect of lost opportunities for user-generated speech as a significant potential cost of any limitation on Section 230 immunity. (13) Depending on how speech is weighted in the calculus, some may conclude that the benefits of our proposed approach are not worth the costs. That is a tenable position. What is not tenable, however, is to disregard the benefits of reduced immunity, or to implicitly treat speech as infinitely valuable, such that no benefit could ever be great enough to compensate for any reduction in speech.

    Of course, even free-speech absolutists sometimes acknowledge that reform efforts may entail such a tradeoff. Indeed, in 2019, a group of 53 academics and scholars and 28 civil-society groups, including some of the staunchest defenders of online speech, proposed a set of "Principles for Lawmakers" to guide potential Section 230 reform. (14) These principles implicitly recognized the tradeoff, inasmuch as they acknowledged a theoretical path for reform and offered a framework to assess any proposed reforms. The top-level principles are:

    Principle #1: Content creators bear primary responsibility for their speech and actions;

    Principle #2: Any new intermediary-liability law must not target constitutionally protected speech;

    Principle #3: The law shouldn't discourage Internet services from moderating content;

    Principle #4: Section 230 does not, and should not, require "neutrality";

    Principle #5: We need a uniform national legal standard;

    Principle #6: We must continue to promote innovation on the Internet; and

    Principle #7: Section 230 should apply equally across a broad spectrum of online services. (15)

    The goal of these principles is to preserve, as much as possible, the social gains that the Internet has provided, while directing any reform efforts toward targeting valid and well-defined harms. Thus, practical reforms that introduce harm-mitigation measures should also adopt appropriate constraints to adequately protect speech, encourage moderation, promote innovation, and avoid unnecessary administrative burdens.

    Any reform efforts must begin with Principle #1: Content creators bear primary responsibility for their speech and actions. Obviously, platforms can and should be held responsible for their content when they are acting as content creators, as is the case under Section 230 today. (16) But holding platform users responsible means acknowledging that platforms may sometimes shield users from responsibility. It also means acknowledging that, in holding users responsible, it may be necessary to address the relationship between users and platforms, and not only the relationship between users and victims:

    Third-party enforcement of any sort serves as a possible answer when deterrence fails because "too many" wrongdoers remain unresponsive to the range of practicable legal penalties. Direct deterrence is the normal strategy for enforcing legal norms.... But of course direct deterrence sometimes fails for reasons that follow from its fundamental assumptions. It may fail because wrongdoers lack the capacity or information to make self-interested compliance decisions.... Yet, a more important source of failure is often the sheer cost of raising expected penalties high enough to deter wrongdoers. ... Of course, these constraints on direct deterrence do not necessarily imply a need for supplemental enforcement measures. Alternative measures are justified only if they, in turn, can lower the total costs of direct enforcement and residual misconduct. (17)

    Key to this acknowledgment is the basic rule that people respond to incentives. Conduct harmful to others is rarely deterred without external forces to provide those incentives. Sometimes, these forces take the form of inchoate social norms; sometimes, they are implicit threats of reprisal; sometimes, they are threats of law enforcement or civil liability. But arguably, the incentives offered by each of these forces are weakened in the context of online platforms. Certainly, everyone is familiar with the significantly weaker operation of social norms in the more attenuated and/or pseudonymous environment of online social interaction. (18) While this environment facilitates more legal speech and conduct than the offline world does, it also facilitates more illegal and tortious speech and conduct. Similarly, fear of reprisal (i.e., self-help) is often...
