Abstract

An event study is a statistical method for determining whether some event--such as the announcement of earnings or the announcement of a proposed merger--is associated with a statistically significant change in the price of a company's stock. The main inputs to an event study are historical stock returns for the companies under study, benchmark returns like the return to the broader stock market, and standard statistical tests like t-tests that are used to test for statistical significance. In securities litigation and regulation, event studies are used primarily to detect the impact of disclosures of alleged fraud on the price of a single traded security.

But are event studies in securities litigation reliable? What is interesting about the use of event studies in securities litigation is that the methodology litigants use in court differs from the methodology that economists apply in their research. With few exceptions, securities litigation event studies are single-firm event studies, while almost all academic research event studies are multi-firm event studies. Multi-firm event studies are generally accepted in financial economics research, and peer-reviewed journals contain them by the hundreds. By contrast, single-firm event studies--the mainstay of modern securities fraud litigation--are almost nonexistent in peer-reviewed journals.

Importing a methodology that economists developed for use with multiple firms into a single-firm context creates three substantial difficulties. First, single-firm event studies suffer from a severe signal-to-noise problem in that they lack statistical power to detect price impacts unless the price impacts are quite large. Inattention to statistical power lowers the deterrent effect of the securities laws by giving a "free pass" to some economically meaningful price impacts and may encourage more small- and mid-scale fraud than is socially optimal given the costs of litigation. Second, single-firm event studies do not average away confounding effects. While this problem is well known, some courts have unrealistic expectations of litigants' ability to quantitatively decompose observed price impacts into those caused by alleged fraud and those unrelated to alleged fraud. Third, low statistical power and confounding effects combine to generate sizeable upward bias in detected price impacts and therefore in damages. To improve the accuracy of adjudication in securities litigation, we suggest that litigants report the statistical power of their event studies, that courts allow litigants flexibility to deal with the problem of confounding effects, and that courts and litigants consider the possibility of upward bias in the detection of price impacts and the estimation of damages.

Table of Contents

Introduction
I. Difficulty #1: Low Statistical Power
   A. Abnormal Returns and Type I Error
   B. Abnormal Returns and Type II Error
   C. Statistical Significance and Likelihood Ratios
   D. Low Power and Statistical Insignificance in Case Law
   E. An Aside on Power and the MFES
II. Difficulty #2: Confounding Effects
   A. The Problem
   B. Confounding Effects in Case Law
III. Difficulty #3: Bias
Conclusion

INTRODUCTION

An event study is a statistical method for determining whether some event--such as the announcement of earnings or the announcement of a proposed merger--is associated with a statistically significant change in the price of a company's stock. (1) The main inputs to an event study are historical stock returns for the companies under study, benchmark returns like the return to the broader market, and standard statistical tests like t-tests that are used to test for statistical significance. In securities litigation and regulation, event studies are used primarily to detect the impact of disclosures of alleged fraud on the price of a traded security.
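The mechanics described above--benchmark the firm's return against the market, then test whether the event-day residual is statistically significant--can be sketched in a few lines. This is a minimal illustration under assumed inputs; the function name and interface are our own, not a procedure taken from the article or from any litigant's report:

```python
# Minimal single-firm market-model event study (illustrative sketch only).
from statistics import mean, stdev

def abnormal_return_t(stock, market, event_stock, event_market):
    """Fit a market model over an estimation window of daily returns,
    then return the t-statistic of the event-day abnormal return."""
    mbar, sbar = mean(market), mean(stock)
    beta = (sum((m - mbar) * (s - sbar) for m, s in zip(market, stock))
            / sum((m - mbar) ** 2 for m in market))
    alpha = sbar - beta * mbar
    # Residual volatility from the estimation window is the "noise"
    # against which the event-day move is judged.
    resid = [s - (alpha + beta * m) for s, m in zip(stock, market)]
    ar = event_stock - (alpha + beta * event_market)  # abnormal return
    return ar / stdev(resid)  # compare to roughly +/-1.96 at the 5% level
```

In a multi-firm study, abnormal returns like `ar` would be averaged across many firms before testing, which is what shrinks the noise; the single-firm version must judge one residual against the full residual volatility.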

After the Supreme Court endorsed the fraud-on-the-market doctrine in Basic Inc. v. Levinson (2) in 1988, event studies became so entrenched in securities litigation that they are viewed as necessary in every case. (3) Based on the efficient markets hypothesis that "the market price of shares traded on well-developed markets reflects all publicly available information, and, hence, any material misrepresentations," (4) securities litigants use the event study to help answer two crucial questions. First, was there a price impact at the time of an alleged misrepresentation or corrective disclosure? Second, if there was a price impact, how much of it was caused by the alleged misrepresentation or corrective disclosure as opposed to other, unrelated factors? (5) In proposing answers to these questions, litigants have not been shy in asserting the event study's impressive academic pedigree. (6) But the methodology that litigants use in court differs from the methodology used in academic research. In particular, securities litigation event studies are almost always single-firm event studies ("SFESs") that examine the price moves of the security of the single firm involved in the litigation, (7) while almost all academic research event studies are multi-firm event studies ("MFESs") that examine large samples of securities from multiple firms. (8) Importing a methodology that economists developed for use with multiple firms into a single-firm context creates three substantial difficulties: low statistical power, confounding effects, and bias.

First, an SFES often has low statistical "power" to detect an economically meaningful price impact: to be reliably detected, the impact typically must be roughly twice as large as the standard deviation of the examined firm's daily (abnormal) returns. Requiring conventional levels of statistical significance when power is low effectively gives a "free pass" to economically meaningful securities fraud, because the SFES simply cannot detect price impacts below a high threshold. Courts, ignoring low power, then conclude that some economically large price impacts are immaterial. Courts err because of their mistaken premise that statistical insignificance indicates the probable absence of a price impact. Overreliance on statistical significance without consideration of statistical power "leads to a decision-making regime in which the probability of an incorrect exoneration far exceeds the probability of an incorrect condemnation." (9) While it is possible that this regime reflects a rational policy judgment, we see no evidence such a judgment has been made deliberately.
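Why a roughly two-standard-deviation impact marks the practical detection threshold can be checked directly from the normal distribution. The helper below is our own illustration (the name and parameterization are assumptions, not the article's), expressing the true impact in units of the firm's daily abnormal-return standard deviation:

```python
# Power of a two-sided test in a single-firm event study
# (illustrative helper; names and defaults are our own assumptions).
from statistics import NormalDist

def sfes_power(impact_in_sd_units, alpha=0.05):
    """Probability that a true price impact of the given size, measured in
    units of daily abnormal-return standard deviation, is flagged as
    statistically significant by a two-sided test at level alpha."""
    n = NormalDist()
    z = n.inv_cdf(1 - alpha / 2)  # critical value, about 1.96
    d = abs(impact_in_sd_units)
    # Probability the observed (noisy) abnormal return clears the cutoff.
    return n.cdf(-z - d) + 1 - n.cdf(z - d)
```

On these numbers, a true impact of two standard deviations is detected only about half the time (`sfes_power(2.0)` is roughly 0.52), and a one-standard-deviation impact escapes detection more than 80% of the time--the "free pass" the text describes.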

Second, when an SFES does detect a price impact, it reflects confounding effects that are unrelated to the alleged fraud. Unfortunately, there is no fully reliable, mathematically precise way to decompose an observed event return in an SFES into component parts: the part related to alleged fraud and the part not related to alleged fraud. Financial economists have long understood that our ability to fully explain observed price moves is quite limited; (10) much price movement occurs for reasons unrelated to news, including as a result of the liquidity trades of investors in the market seeking to raise funds for other purposes and the (at least short-term) impact of "noise traders" who trade for irrational reasons.

Third, low statistical power and confounding effects combine to generate sizeable upward bias in detected price impacts and therefore in damages (i.e., the magnitude of the price impact and of damages is overstated). This upward bias means that we cannot leave confounding effects unaddressed in the hope that they are as likely to fall on one side of the true price effect as on the other. For example, suppose the true price impact is -2.0%, but the requirement of statistical significance means that price impacts less severe than -2.94% will be rejected as statistically insignificant. In that case, a price impact will be detected only when confounding effects push the observed price impact past -2.94%. As we show later, the expected detected price impact in such situations is -3.9%, nearly double the true -2.0% impact in magnitude.
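The arithmetic behind this example can be reproduced with a small Monte Carlo simulation. We assume, for illustration only, a daily abnormal-return standard deviation of 1.5%, which makes the two-sided 5% cutoff -1.96 × 1.5% ≈ -2.94%, matching the numbers in the text:

```python
# Monte Carlo illustration of truncation bias in detected price impacts.
# Assumption (ours): daily abnormal-return sd of 1.5%, so the two-sided
# 5% significance cutoff is -1.96 * 1.5% = -2.94%, as in the example.
import random
from statistics import mean

random.seed(0)
TRUE_IMPACT = -2.0     # true fraud-related price impact, in percent
SIGMA = 1.5            # assumed daily abnormal-return sd, in percent
CUTOFF = -1.96 * SIGMA  # -2.94%: the significance threshold

observed = [TRUE_IMPACT + random.gauss(0.0, SIGMA) for _ in range(200_000)]
detected = [x for x in observed if x < CUTOFF]  # only these are "found"

# mean(detected) comes out around -3.85 to -3.9, overstating the true
# -2.0% impact, consistent with the -3.9% figure in the text.
```

The bias arises purely from conditioning on significance: the confounding noise averages to zero across all draws, but among the draws that clear the cutoff it is, by construction, one-sided.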

These problems help explain why the SFES methodology is applied so infrequently in peer-reviewed research. But the same problems have not limited the use of the SFES in securities litigation. Securities litigants use SFESs to show that securities did or did not trade in an efficient market, to establish that alleged misrepresentations did or did not impact the stock price for purposes of materiality and reliance, and to determine the existence or absence of loss causation and the amount of damages. We are not the first to point out that SFESs have low statistical power and are subject to problems with confounding effects (though we are, to our knowledge, the first to point out the bias problem in this context). (11) But especially now that courts are increasingly required to use event studies to address price impact evidence at the class certification stage, (12) it is time to review the limitations of the single-firm event study in securities litigation--particularly those limitations that arise from low power, confounding effects, and bias--in order to provide courts and litigants with a firmer basis for considering evidence based on single-firm event studies.

In Part I, we explain why the SFES as typically applied in securities litigation has low statistical power, in the sense that it cannot detect price impacts reliably unless they are large. In Part II, we explain the problem of confounding effects. In Part III, we explain how low statistical power and confounding effects combine to generate bias in detected price impacts. We conclude with proposals for improving the accuracy of adjudication involving SFESs. These include requiring litigants to report the power of their analyses, allowing litigants flexibility to address the problem of confounding effects, and encouraging courts and litigants to consider the possibility of upward bias in the detection of price impacts and the estimation of...