Feminist law journals and the rankings conundrum.

Author: Grossman, Joanna L.
Position: Why a Feminist Law Journal?

Spring is a fitting season to discuss the relationship between rankings and feminist law journals, as U.S. News & World Report prepares to release its annual rankings of American law schools. Decried by many academics as arbitrary and unfair, (1) these rankings nonetheless exert an extraordinary influence on law school life. (2)

Rankings of this sort not only drive decisions made by schools (which students to recruit, admit, and, most importantly, entice with scholarship money) and by applicants (where to apply, where to matriculate, and whether to transfer), but also decisions that are more central to the academic enterprise. Law school deans and faculties make decisions about resource allocation, faculty hiring, curriculum, and the like, against the backdrop of rankings, which at least indirectly place a point value on each such decision.

Even what one thinks of as the purely academic side of the law school enterprise--faculty research and writing--is not immune. Far from it. Faculty members clearly make choices about where to publish, and perhaps even what to write and whom to cite, based in part on published rankings, which deem schools and individuals "productive," "prolific," or "academically reputable" based in part on those choices. It is here that the conflict between rankings and feminist law journals comes into sharper focus. Each individual set of rankings--bar none--creates a disincentive to publish in feminist law journals.

  1. THE INFLUENCE OF LAW SCHOOL RANKINGS ON PUBLISHING

    To focus on disincentives to publish in feminist law journals is perhaps antithetical to the goals of feminist legal theory and the greater feminist enterprise, and yet the power they exert over faculty decision making cannot be ignored. Rankings affect individuals and schools at all ends of the spectrum, but they afflict faculty members without tenure and schools without a securely "elite" status most acutely.

    There are two sets of rankings of law schools that are regularly discussed (even if only to criticize) in law school circles: U.S. News & World Report, (3) a national magazine that publishes an annual ranking of graduate schools and colleges, and Brian Leiter's Educational Quality Rankings (EQR), (4) a relatively new web-published report issued every two years. (5) Both of these sets of rankings have the potential to affect faculty publishing decisions, as do a variety of narrower studies that rank law reviews or faculty productivity in isolation, without ranking law schools generally. (6)

    For the U.S. News survey, twenty-five percent of a law school's overall ranking is derived from its reputation among academics, measured by an opinion survey sent to four designated faculty members at all ABA-accredited law schools. (7) This measure is obviously subjective, and the respondents are not given any information about each school they are asked to assess, but instead given only the name of the school and asked to assign a rating between one and five taking into account a variety of factors that may bear on its academic reputation. (8)

    Given this methodology, it is difficult to quantify the effect a faculty's publishing record may have on the responses, though it is reasonable to hypothesize some connection. There must be some explanation for the increasingly expensive, glossy, and exaggerated brochures that law schools send out by the thousands to tout the publishing successes of their faculty members. (One law school's alumni magazine was deemed such a "shamelessly self-promoting publication" as to earn the label, given by an anonymous professor, of "law porn." (9))

    The opinions held by those surveyed are almost certain to be influenced by conventional notions of prestige with respect to article placements. And respondents are likely to know more about the publishing records of faculty at different schools than they are about the number of books in each library or the job placement success of their competitors' graduates. The survey thus potentially influences the decisions faculty make about where to publish articles.

    It is at least conceivable, though, that U.S. News respondents base their opinions on a broader array of factors; indeed, the survey itself gives a varied list of factors that they might consider in assigning scores to each school. But U.S. News has been criticized on this exact point: for failing to provide a precise or objective measure of academic reputation or faculty quality, or indeed for failing to focus on it at all. (10) It is this criticism in part that enabled University of Texas Law Professor Brian Leiter to make an arguably successful launch of his EQR site, which promised law students a more objective measure of law school quality than U.S. News. (11)

    In its first iteration, the EQR site departed from U.S. News in two significant ways. First, seventy percent of a school's overall ranking was derived from "faculty quality" (a much more significant percentage than U.S. News). Second, it gave equal weight to subjective measures of faculty quality and objective ones. The subjective measures were taken from the academic reputation scores in the U.S. News survey. The objective measures of faculty quality were based on the faculty's frequency of citation and per capita rate of publication.

    The real blow to feminist law journals is in this last measure. "Per capita rate of publication," as defined by the EQR, gives faculty members credit only for publications in the top ten law reviews, the top ten peer-edited law journals (none of which are feminist law journals), and books from a handful of top academic and law publishers. In the objective portion of the EQR, an article in any feminist or women's law journal literally counts for nothing.

    Beginning with the 2003-04 rankings, the EQR changed its subjective measure of faculty quality, replacing the U.S. News reputation scores with surveys of "leading" junior and senior scholars in law schools. This subjective ranking methodology differs from U.S. News' academic reputation survey in a few ways: it surveys only "active and distinguished" scholars, rather than surveying designated people at every school; it provides respondents with a list of faculty at each school and lists each school by number rather than name in order to avoid undue influence of preconceived notions about the quality of a particular school; and it seeks responses from participants with differing levels of seniority and diverse academic specialties. (12) For now...
