Pharming Out Data: A Proposal for Promoting Innovation and Public Health through a Hybrid Clinical Data Protection Scheme.

Author: Gulotta, Lea M.
 

TABLE OF CONTENTS

I. INTRODUCTION TO DRUG DEVELOPMENT AND THE PROBLEM OF CLINICAL DATA PROTECTION
II. DEFINING CLINICAL DATA AND EXPLORING THE INCREASING NEED FOR HARMONIZED CLINICAL DATA PROTECTION
III. THE CURRENT STATE OF CLINICAL DATA PROTECTION
     A. General Methods of Clinical Data Protection
     B. Single-Country Statutes, Regulations, and Guidelines
     C. The Trans-Pacific Partnership
     D. NAFTA: Do We Hafta? (and Other Free Trade Agreements)
     E. The Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS)
IV. THE BENEFITS AND DRAWBACKS OF COMPREHENSIVE CLINICAL DATA PROTECTION
V. SOLUTION
     A. Data Exclusivity as the Gold Standard for Clinical Data Protection
     B. A Hybrid Approach: Pro Rata Data Exclusivity and Cost Sharing
     C. How Do We Get There?
VI. CONCLUSION

I. INTRODUCTION TO DRUG DEVELOPMENT AND THE PROBLEM OF CLINICAL DATA PROTECTION

The pharmaceutical industry is a formidable beast, both in the United States and worldwide. In 2014, an estimated 48.9 percent of people in the United States had used at least one prescription drug in the past thirty days. (1) This represents a marked increase from just two decades earlier, when only 37.8 percent of survey respondents answered the same question in the affirmative. (2) Globally, pharmaceutical revenues exceeded $1 trillion for the first time in 2014 and have continued to rise since, with US-based pharmaceutical companies accounting for approximately half of the market revenues. (3) In 2017, the world pharmaceutical market value was estimated to be $1.2 trillion. (4)

However, drug development is also a highly risky business. It can cost anywhere from $650 million to $2.7 billion to put a drug on the market in the United States. (5) When as few as one in ten thousand compounds eventually becomes a marketable product--only twenty-two novel drugs were approved by the Food and Drug Administration (FDA) in 2016 (6)--it would be smarter to buy a lottery ticket. Further, although pharmaceutical-company expenditures on research and development continue to grow, only 30 percent of the drugs that do make it to market recoup enough money to meet or exceed the average cost of development. (7)

To truly understand the costs involved in getting a drug to market, it is also important to understand the drug development process. (8) The first step is researching in a laboratory to discover new compounds or find new uses for drugs that are already approved. (9) Molecules that show a potential therapeutic indication move on to the preclinical research stage, the goal of which is to determine the maximum dosage that can be given to humans without toxicity, if such a dose exists. (10) Preclinical research can be conducted in vitro, but is most commonly done using animals such as mice, rats, dogs, and monkeys. (11) The FDA requires a specific set of preclinical studies before drugs may be tested on humans. (12)

After the requisite preclinical studies are conducted and maximum safe dosages are determined, pharmaceutical companies file an Investigational New Drug (IND) application with the FDA. (13) In the application, the developer must include data from preclinical research, including information about toxicity and manufacturing, plans for future human clinical trials, and data from any prior human research involving the drug. (14) If the FDA is satisfied that the drug meets federal standards, the pharmaceutical company is allowed to proceed to clinical trials, where it administers the drug to humans. (15)

Phase I clinical trials are small in size and short in duration, and often involve healthy volunteers so scientists can assess safety, proper dosage, and potential side effects. (16) Approximately 70 percent of drugs advance to Phase II clinical trials, where several hundred patients with the target condition are given the drug for a few months to two years, depending on the particular trial. (17) The purpose of these trials is to determine whether the compound has the desired effect on the target disease, and if there are any new side effects in the target population. (18)

Thirty-three percent of molecules move on to Phase III clinical trials, which are designed to assess long-term effects of the drug in the target population. (19) Accordingly, these trials usually last between one and four years, with an average trial length of approximately twenty months. (20) After Phase III trials, the drug developer may file a New Drug Application (NDA) with the FDA, seeking approval to market the drug to the public. (21) The NDA includes reports on all studies and data collected up to the point of filing, as well as proposed labeling, patent information, and data from studies that were conducted outside of the United States. (22) The last phase of clinical research, Phase IV, assesses safety and efficacy in several thousand volunteers with the indicated condition. (23)

Although consumers often take for granted that their medications will be safe and will not cause excessive side effects, some may nevertheless view such extensive drug-development requirements as burdensome and overly expensive. However, a quick examination of the history behind the advent of clinical trials makes it clear why close regulation of pharmaceutical testing is necessary.

The birth of the modern clinical trial scheme goes back to the 1960s, when a German pharmaceutical company internationally marketed a sedative called thalidomide. (24) The product was advertised as "completely safe," even if used by pregnant women. (25) Doctors began to prescribe thalidomide not only as a sedative, but also as a treatment for morning sickness in expectant mothers, an off-label use discovered by an Australian physician. (26) This turned out to be a tragic mistake. Despite the manufacturer's assurances, thalidomide was far from safe for pregnant women; in fact, it interfered with fetal development and caused severe birth defects, most commonly resulting in babies born with shortened or absent limbs. (27) The clinical trials for thalidomide were cursory at best--they involved over a thousand physicians administering the drug to twenty thousand patients, and did not require that the patients be tracked after taking the medication. (28)

Prior to these events, the United States had passed the Food, Drug, and Cosmetic Act (FDCA) of 1938, which required drug manufacturers to show that a drug was safe, and to submit an application to the FDA before the drug could be put on the market. (29) The statute was enacted in the wake of another catastrophe where a Tennessee drug company marketed Elixir Sulfanilamide, a product chemically similar to antifreeze that killed over one hundred people. (30) However, the FDCA had a major flaw: if the FDA did not act on an application within a certain time period, it would automatically become approved, and therefore the requirement of proving safety did not serve as a sufficient gatekeeper to keep unsafe medicines off the market. (31)

The United States Congress was spurred to address this problem in the wake of the devastation caused by thalidomide. It enacted the Kefauver-Harris Amendments to the FDCA, which required drug manufacturers to prove the efficacy and safety of a product before the FDA would approve it. (32) The Amendments specified that evidence of effectiveness must be based on "adequate and well-controlled clinical studies conducted by qualified experts." (33) Further, they closed the gap in the FDCA: the FDA now had 180 days to approve a new drug application, and an application not approved within that period would be considered denied rather than automatically approved. (34) The legislative bodies of the European Union (EU) and the United Kingdom (UK), among others, also enacted statutes overseeing clinical trials in the wake of the thalidomide scandal. (35)

It is important to regulate the actual clinical trials for ethical reasons as well. The starkest illustration of this need is a forty-year study conducted by the United States Public Health Service that began in 1932, titled the "Tuskegee Study of Untreated Syphilis in the Negro Male." (36) Researchers told the six hundred subjects--all poor, black men from rural Alabama--that they were being treated for "bad blood," a term used at the time to describe ailments such as anemia or fatigue. (37) In reality, the government planned to observe the effects of syphilis and study the body after it succumbed to the disease. (38) Despite promises from the researchers, the patients never received the necessary treatment, even after scientists discovered in 1947 that penicillin could effectively combat syphilis. (39)

In the wake of the Tuskegee study, the United States enacted the National Research Act, which requires researchers to obtain voluntary consent from all persons taking part in studies or clinical trials and provides for oversight of any study using human subjects. (40) Internationally, the World Medical Association put forth the Declaration of Helsinki as a statement of ethical principles for conducting medical research with human patients. (41) These actions represent an international consensus that extensive regulation of clinical trials and drug development is necessary given the ethical, health, and safety concerns inherent in the process.

But everything costs money, and tight regulation makes drug development an extremely expensive industry. In the United States, a single clinical trial may cost anywhere from $1 million to $53 million. (42) The entire drug development process may run a pharmaceutical company between $650 million and $2.7 billion. (43) The increasing cost of clinical research worldwide has serious implications. On the whole, the pharmaceutical industry has become more risk averse and, therefore, less willing to develop novel compounds or specialized drugs with smaller potential markets. (44) Clinical research facilities are similarly wary, carefully selecting...
