The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice, and Lives.

Author: Van Doren, Peter

The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice, and Lives. Stephen T. Ziliak and Deirdre N. McCloskey. Ann Arbor, Mich.: University of Michigan Press, 2008, 352 pp.

How do many scientific disciplines estimate and report results? Practitioners estimate regression models or conduct difference-of-means tests through experiments. And they report which results are significant and which are not (i.e., different from zero with 95 percent confidence). In this important book, Ziliak and McCloskey have three objectives: to remind us that such research may be mindless, unscientific, and costly; to explicate the intellectual history of significance testing and the struggles among the professors who developed sampling and statistical testing; and to illustrate the correct way to conduct research and praise the few who report their research properly.

The Costs and Benefits of Significance Testing

First, a little review of significance testing. The central question in research is what is the effect of some variable of interest on an outcome. In medicine, for example, we want to know the effect of a drug on illness. In economics, we care about the effect of prices on consumption or work choices. To assess those effects, we rarely have data on populations. Instead we have data on only hundreds or sometimes thousands of people.

Researchers must estimate the likelihood that the results from the sample reflect what would be found if the whole population were studied. The answer depends on the size of the sample and the signal-to-noise ratio in the sample. The smaller the sample and the smaller the signal-to-noise ratio, the lower the likelihood that the sample result is the population result. Said differently, small sample sizes and noisy data widen the range of population results that are logically consistent with a particular sample result. In such small, noisy samples, it becomes more likely that observed effects are the result of chance rather than systematic factors.
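The mechanics behind that paragraph can be shown in a few lines. What follows is a minimal sketch, not drawn from the book, of the difference-of-means test described above; the sample sizes, true effect, and noise level are illustrative assumptions, and the 1.96 cutoff is the usual normal-approximation critical value for 95 percent confidence.

    # Illustrative only: how sample size and noise, not just the true effect,
    # determine whether a difference of means clears the 95 percent threshold.
    import numpy as np

    rng = np.random.default_rng(0)

    def significant_at_95(treated, control):
        """Welch two-sample test of a zero difference in means,
        judged against the 1.96 normal-approximation critical value."""
        diff = treated.mean() - control.mean()
        se = np.sqrt(treated.var(ddof=1) / len(treated) +
                     control.var(ddof=1) / len(control))  # standard error of the difference
        t = diff / se                                      # signal-to-noise ratio
        return diff, t, abs(t) > 1.96

    true_effect = 1.0   # assumed real effect of the treatment
    noise_sd = 5.0      # assumed noise in individual outcomes

    for n in (30, 3000):  # small versus large sample, same true effect
        control = rng.normal(0.0, noise_sd, n)
        treated = rng.normal(true_effect, noise_sd, n)
        diff, t, sig = significant_at_95(treated, control)
        print(f"n={n:5d}  estimated effect={diff:5.2f}  t={t:5.2f}  significant at 95%: {sig}")

With these assumed numbers, the small sample will usually fail to clear the threshold while the large one will usually clear it, even though the true effect is identical in both cases. That is the sense in which a verdict of "significant" tracks sample size and noise as much as it tracks the effect itself.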

And then there is the question that actually has no scientific answer: How confident should we be that a result is not the result of chance? This book chronicles the development of the convention that 95 percent likely is likely enough, and then the degradation of that convention into what the authors view as its fatally flawed, shriveled version: unless a sample result differs from zero with 95 percent confidence, you have no result at all.

What is so odd about the role of statistical significance is how out of character it is for economics. Normally economists preach to other disciplines to...
