LETTERHEAD BIAS AND THE DEMOGRAPHICS OF ELITE JOURNAL PUBLICATIONS.

Author: Thomson, Stephen
 
TABLE OF CONTENTS

  I. INTRODUCTION
  II. METHODOLOGY
  III. LETTERHEAD BIAS
  IV. SELF-PUBLICATION
  V. THERE'S A WHOLE WORLD OUT THERE: OVERSEAS AUTHORS
  VI. PRACTITIONERS' PREDICAMENT: ACADEMIC, PRACTITIONER, AND JUDICIAL AUTHORSHIP
  VII. TO SHARE OR NOT TO SHARE?: SOLE AUTHORSHIP AND CO-AUTHORSHIP
  VIII. DIGEST AND CONCLUSION
  ANNEX 1: AUDITED LAW REVIEWS AND JOURNAL MEDIAN ASSIGNED SCORES
  ANNEX 2: U.S. NEWS RANKINGS 2019 AND INSTITUTIONAL ASSIGNED SCORES

I. INTRODUCTION

    Rumors abound in corridors, faculty lounges, and "angsting threads" (1) across the U.S.: what can I do to maximize my chances of placing an article in a top journal? Should I target some journals while avoiding others? Will my institutional affiliation help or hinder my chances of publication? Do some journals largely act as a vehicle for publishing their own faculty members' articles? Are my chances reduced by being a practitioner, and are there some more practitioner-friendly outlets? Are sole or co-authored articles preferred? Do overseas scholars have a realistic chance of placing their article in a top U.S. journal? Academics are right to survey their prospects and strategize to improve them, for publication achievements strongly influence hiring, promotion, tenure, and performance appraisal. It seems that everyone has a theory or view to express on these questions, though rarely, if ever, are these based on hard data. (2)

    This Article provides answers to such questions based on hard data, in the largest and most extensive audit of U.S. law journal articles ever undertaken, surveying over 4,500 articles by almost 6,000 authors. Hard data is desirable because "[v]ery rarely... has the criticism [of law reviews] been supported by anything more systematic than anecdotal evidence." (3) Statistically grounded analysis, if properly conducted, provides a more methodologically robust foundation than anecdotes on which to assert claims of dysfunctionality. In addition to being one of the few analyses of the U.S. law journal system based on hard data, (4) and the largest audit of U.S. law journals ever undertaken, this study has the added value of being conducted by an "outsider": a non-U.S. legal academic for whom publishing legal articles in peer-reviewed journals is the norm and for whom the student-edited model appears foreign and eccentric. This author therefore brings a more objective perspective, with no institutional links or "axe to grind" in the U.S. system, as well as a heightened awareness of the special challenges faced by overseas authors, which have not hitherto been documented by many of the commentators writing in this field. (5) Furthermore, this study goes beyond macro-level analysis by identifying specific journals, and it does not shy away from highlighting anomalous or objectionable practices where they arise. It is therefore more extensive, comprehensive, and objective than its predecessors. It is also, unlike most of its predecessors, not merely survey-based. (6)

    The study presents and discusses the findings of an audit of the top fifty U.S. law journals ("T50"), as defined by the Washington and Lee Law Journal Rankings, over a period of five calendar years from 2014 through 2018. It documents the trends and patterns in the demographics of 5,791 authors across 4,593 articles. The research provides statistical evidence of "letterhead bias" (7) and demonstrates that the phenomenon intensifies at higher-ranked law schools. It also reveals that some journals publish a disproportionate number of their own faculty's articles, with the practice most egregious at the Virginia Law Review, New York University Law Review, and Harvard Law Review. This practice is, likewise, most intense at higher-ranked journals. Overseas authors have particularly low prospects of publishing in a T50 journal, and there is little correlation between journal ranking and the extent to which a given journal publishes practitioner-authored articles. Finally, a greater proportion of co-authored articles tends to appear in higher-ranked than in lower-ranked journals.

    The significance of this article's data-centric approach is that rumors, anecdotes, and theories can be grounded--or dispelled--on the basis of verifiable, quantitative data. The data presented does, however, reveal some hard truths both for aspiring authors and editorial boards across the U.S., and further questions the integrity and credibility of the student-edited journal model. The article does not suggest that student editors are, across the board, engaged in improper conduct, or that deserving articles are never published in top journals. Rather, it exposes and highlights flaws that can enable stakeholders to push more forcefully for changes that will make the law review market more open, fair, and transparent. The Article concludes that the only feasible solution is for blind review to be universally adopted in journals' selection processes. As faculty members in the U.S. and beyond depend on publication credentials for performance-related pay, hiring, promotion, or tenure, (8) the more objectionable the data shows the journal system to be, the greater should be their impetus to demand change. As the broader legal community relies on universities to maximize the quality of their journal scholarship, the more that publication decisions seem to be based on criteria other than merit, (9) the greater the entitlement of the community to feel cheated. It is time that changemakers were armed with the data they need to demand better of law reviews.

  II. METHODOLOGY

    The study audited the T50 general ("flagship") journals in the 2017 (10) Washington and Lee Law Journal Rankings ("W&L"). A list of the audited journals can be found in Annex 1. In identifying the target journals for the study, selection based on W&L was chosen over selection of the flagship journals of the T50 ranked law schools in the U.S. News and World Report ("U.S. News") rankings, (12) as W&L is a journal ranking rather than an institutional ranking. While the "consensus" is that legal academics rank journals according to U.S. News, (13) W&L provides hard data on article impact and citations, and thus serves as a more quantified and scientific ranking of journals. (14) As Timothy T. Lau observed, there are various problems with using the U.S. News rankings to rank journals themselves, and "by using the U.S. News & World Report rankings of law schools to rank journals, legal academics are taking the rankings far beyond their intended use and essentially are ranking journals based on factors that have little to do with the journals themselves." (15) In any event, the U.S. News rankings retained an important place in this study, as they provided the law school rankings used in the determination of institutional prestige for the purposes of investigating letterhead bias, namely a bias in favor of, or against, an article based on the institutional affiliation(s) of its author(s).

    The audit period covered articles published during the calendar years from 2014 through 2018. An audit period spanning five calendar years offered a wide enough sample while maintaining statistical manageability. (16) In addition, as editorial boards tend to turn over on an annual basis (with the entire journal staff tending to turn over every two years), the idiosyncrasies of a single editorial board (17) should not have undue statistical influence (no more than 20% per journal) on the overall trends for that journal. The audit focused on long-form articles and did not include shorter essays, notes, and case comments. (18) The decision to focus on long-form articles was made because these articles are widely regarded as the most influential and significant contributions published by journals. They are--by contrast with other forms of journal output--where most original scholarship is presented to the world; they are the primary venue where new arguments, theories, and evidence are set out; and, importantly, they are also what tend to count most for the performance appraisal, promotion, tenure, and hiring decisions to which academics are subject. For example, a long-form article published in a prestigious journal may be the difference between obtaining or failing to obtain tenure, whereas a case comment in a prestigious journal will often count for little at all. The separation of long-form articles from shorter pieces was fairly straightforward for most journals. Online-only work was also excluded from the scope of the study, as that is not (yet) considered to carry the same weight of reputation, credibility, and consequence as work published in print sources.

    Finally, the audit excluded symposium issues. It is generally understood, particularly in the U.S., that publication of an article in a symposium edition is less prestigious than publication of an article in a non-symposium edition of the same journal. This difference likely has to do with the mechanics of symposium editions, which may--though will not always--comprise polished versions of papers presented at a symposium, seminar, or conference. All else being equal, it will typically be easier to secure an invitation to present at an event of that nature (thus potentially guaranteeing or significantly boosting one's prospects of publishing in the associated journal) than to secure a publication spot in a non-symposium edition of the same journal. Accordingly, the study excluded symposium editions from the scope of the audit, as they might skew the data. With these parameters applied to the T50 journals, there were, as noted, 4,593 articles audited, representing the work of 5,791 authors.

    The study investigated a number of factors, of which the following are discussed and presented in this article. First, it examined the correlation between the W&L journal ranking and the median U.S. News ranking of the...
