Improving Scientific Judgments in Law and Government: A Field Experiment of Patent Peer Review

Daniel E. Ho* and Lisa Larrimore Ouellette*

Journal of Empirical Legal Studies, Volume 17, Issue 2, 190–223, June 2020
DOI: http://doi.org/10.1111/jels.12249

*Address correspondence to Daniel E. Ho and Lisa Larrimore Ouellette, Stanford University, 559 Nathan Abbott Way, Stanford, CA 94305; email: dho@law.stanford.edu; ouellette@law.stanford.edu. Ho is the William Benjamin Scott and Luna M. Scott Professor of Law, Professor of Political Science & Senior Fellow at the Stanford Institute for Economic Policy Research. Ouellette is Associate Professor of Law & Justin M. Roach, Jr. Faculty Scholar, Stanford Law School.

We thank Cassandra Handan-Nader, Sam Goldstein, Jeff Liu, Anne McDonough, Oluchi Mbonu, Katie Mladinich, Jason Reinecke, Josh Rosefelt, Angela Teuscher, Collin Vierra, and Alex Yu for research assistance, and participants at the Intellectual Property Scholars Conference and the Duke Law Faculty Workshop for helpful comments.

© 2020 The Authors. Journal of Empirical Legal Studies published by Cornell Law School and Wiley Periodicals LLC. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
Many have advocated for the expansion of peer review to improve scientific judgments in law and public policy. One such test case is the patent examination process, with numerous commentators arguing that scientific peer review can solve informational deficits in patent determinations. We present results from a novel randomized field experiment, carried out over the course of three years, in which 336 prominent scientific experts agreed to provide input on U.S. patent applications. Their input was edited for compliance with submission requirements and submitted to the U.S. Patent and Trademark Office (USPTO) by our research team. We show that the intervention caused examiners to (1) increase search efforts and citations to the non-patent (scientific) literature and (2) grant the application at lower rates in the first instance. However, results were substantially weaker and resource costs substantially higher than anticipated in the literature, highlighting significant challenges and questions of institutional design in bringing scientific expertise into law and government.
I. Introduction
One of the principal rationales for government agencies is expertise. Much of that expertise is scientific. Agencies such as the National Institutes of Health (NIH) and the National Science Foundation (NSF) rely critically on peer review to allocate scientific grants. Scholars, commentators, and policymakers have also advocated for greater reliance on peer review in other regulatory domains (Noah 2000; Ruhl & Salzman 2006; Shapiro & Guston 2006), such as food safety (Kessler 1984), environmental protection (National Research Council 2000), education (National Research Council 2004b), and performance measurement (Kostoff 1997). In turn, government agencies have increasingly relied on scientific peer review (Guston 2003) and are required to subject "influential scientific information" to peer review prior to publication (Office of Management and Budget 2005). Others have challenged the desirability of such expansion, citing potential costs in delay, weakened public participation, cherry-picking of reviewers, and crowding out of normative (as opposed to scientific) judgments (Doremus 2007; Grimmer 2005; Fein 2011; Virelli 2009; Wymyslo 2009). Whether peer review functions as intended even within scientific domains remains empirically unclear (Bornmann 2011; Cole et al. 1981; Jefferson et al. 2002; Li & Agha 2015; Smith 2006; Rennie 2016; Bohannon 2013).

One area of significant contestation lies in the patent examination system. This system determines which innovations receive the legal benefits of a patent and is an archetype for government scientific gatekeeping. The five largest patent offices worldwide—in the United States, Europe, China, Japan, and Korea—collectively employ more than 27,000 patent examiners who are tasked with evaluating over 2.7 million patent applications filed each year (European Patent Office et al. 2018). These decisions can have massive implications for science, innovation, and the economy. The U.S. Patent and Trademark Office (USPTO), for instance, estimates that IP-intensive industries added $6.6 trillion to U.S. GDP in 2014 (U.S. Patent & Trademark Office 2016).

The patent system is also seen to suffer from significant informational challenges. As we document below, examiners have limited experience, time, and search capacity. Many commentators have hence advocated for peer review to improve patent examination (Noveck 2006; Graf 2007; Kao 2007; Biagioli 2007; Fromer 2009; Ouellette 2012, 2016; Atal & Bar 2014). Yet to date, no rigorous test of peer review has been conducted for this or any other system of informal adjudication.¹ Our study fills this gap by providing rigorous causal evidence on the effect of patent peer review by external scientific experts. We designed an unexpectedly resource-intensive, three-year-long field experiment in which top scientific experts provided input on randomly selected pending U.S. patent applications. Our results show that peer review increased examiner search efforts and citations to non-patent literature and reduced the propensity to initially grant the application. That said, the results were weaker than the literature has suggested and highlight profound challenges in bringing scientific expertise into legal institutions. In particular, a significant time investment was required from our research team to translate the experts' input into a form that was compliant with USPTO requirements and could be used by patent examiners. As we spell out, our results have considerable implications for innovation policy specifically and for the expanded use of peer review in government more generally, where the evidence base is exceptionally thin (Ho 2017; Ho & Sherman 2017).

¹ In administrative law, "informal adjudication" refers to a proceeding in which an agency determines a party's rights or liabilities without the requirement of a hearing on the record. The vast majority of administrative adjudication is informal.

II. Institutional Setting
We first provide details on the institutional setting of the USPTO. The constraints of this setting explain why so many commentators have argued that external peer review can address core problems of patent quality.

The USPTO employs more than 8,000 patent examiners. Their principal responsibility is to determine whether each legal "claim" in an application is novel and nonobvious in light of earlier publications ("prior art"), and whether the application discloses sufficient information about how to make and use the claimed invention. The burden is on the patent examiner to identify a proper legal basis for rejecting a patent claim; otherwise, it must be allowed. When the examiner does reject a claim, the applicant can respond (over an indefinite number of rounds) with either legal arguments or amendments to the claim.

There are several reasons to believe that scientific input may benefit patent determinations. First, many examiners have little experience in the technical fields they examine (National Research Council 2004a). Only a bachelor's degree in science or engineering is required, even though applications present innovations at the forefront of scientific fields.² Due to high attrition, most examiners at the USPTO have been there for less than four years (Lemley & Sampat 2012). Second, it is well known that patent examiners are less adept at drawing on non-patent scientific literature (Lemley & Sampat 2012), despite this literature constituting the primary basis for reporting scientific findings. Third, examiners have limited time to review applications. On average, an examiner has 19 hours to review an application, research prior art, and write rejections and responses to the applicant's arguments (Frakes & Wasserman 2017). Applications must be granted if examiners cannot identify a proper basis for rejection within this time window.

Due to these constraints, patent examination faces significant quality-control problems, particularly with improperly granted patents (National Research Council 2004a; Frakes & Wasserman 2017). As an indicator of this quality problem, the likelihood that a patent will be granted depends heavily on the (quasi-randomly assigned) examiner (Sampat & Williams 2019).

These institutional constraints explain why scholars have repeatedly argued for peer review in patent examination (Noveck 2006; Graf 2007; Kao 2007; Biagioli 2007; Fromer 2009; Ouellette 2012, 2016; Atal & Bar 2014). Just as reviewers for scientific journals can help editors by identifying prior publications that undermine the asserted novelty of a manuscript, external scientific experts may be able to help patent examiners by identifying the most relevant prior art, leading to greater accuracy and consistency in patent determinations.

² Examiners' education levels vary across technologies; for example, examiners for biotechnology and organic-chemistry-related inventions are more likely to hold master's or doctoral degrees (Vishnubhakat & Rai 2015). Our experiment is not sufficiently powered to examine the effects of peer review in different technology classes, but this is an important question for future work.