In recent years, biomedical research has become increasingly collaborative (Falk-Krzesinski et al., 2011; Wuchty, Jones, & Uzzi, 2007). Today's large research challenges, such as global climate change and the early detection of cancer, can only be addressed through large, multi-site, multi-disciplinary collaborative efforts, as they require the input of scientists from disciplines as disparate as epidemiology, ecology, sociology, clinical medicine, molecular biology, population genetics, and veterinary medicine. The development of information and communication technologies (ICTs) has allowed scientists to work together in larger numbers, on increasingly complex problems, over ever greater distances. Such large collaborative projects bring together scientists from different labs, disciplines, and institutions, forging these disparate elements into a functioning whole. Yet this collaboration comes at a cost. Coordinating large numbers of dispersed researchers working on such complex questions across geographic and institutional boundaries requires a substantial commitment of time and resources (Cummings & Kiesler, 2007). This administrative burden often falls on the lead Principal Investigator (PI) and his/her staff.
In the field of cancer epidemiology, multi-site research projects are increasingly employing coordinating centers (CCs) as a tool to ease that administrative burden by offloading it onto a group with substantial experience in the coordination of such projects (Rolland, Smith, & Potter, 2011). A CC is a central body tasked with coordination and operations management of a multi-site research project. We call this type of collaborative science "Coordinated Collaborative Science," defined as collaborative research done with the support of a CC. While other types of collaborative science may use similar facilitation techniques or experience similar challenges, Coordinated Collaborative Science concentrates much of that facilitation work in the CC itself and, thus, represents a unique perspective on facilitation.
A CC is generally formed to support a specific project, such as a consortium tackling a problem that can only be addressed by employing a networked structure. Seminara et al. (2007) define networks in epidemiology as "groups of scientists from multiple institutions who cooperate in research efforts involving, but not limited to, the conduct, analysis, and synthesis of information from multiple population studies" (p. 1). Such networks can be built and/or funded in a variety of ways; however, in Coordinated Collaborative Science, the research centers and the CC are generally funded as individual components of the network by separate Requests for Application (RFAs) or, occasionally, by contracts. The CC does not usually have an official pre-existing connection to any of the research centers.
We know very little about either how such networks function or how best to facilitate them. In fact, there is no definition of what facilitation means in the context of Coordinated Collaborative Science. CCs receive very little guidance on how to go about their tasks beyond the vague, high-level expectations laid out in the funding agency's RFA. Few CCs write about their work, leaving new CC PIs and managers to devise their practices anew, without evidence of efficiency or efficacy. NIH spends millions of dollars each year supporting such networks and their CCs, yet little research has been done on how CCs work, how to structure them, or precisely which aspects of a research project should be allocated to the CC. The research presented here seeks to rectify that deficiency by investigating and documenting the work practices of two CCs currently involved in Coordinated Collaborative Science. To that end, we have identified areas of the collaborative process that are enhanced by the work of the CC. The areas on which CC members chose to focus, along with their tools and techniques, are the result of collective decades of experience coordinating multi-site projects. As such, they represent crucial sources of knowledge, which, in turn, could be used to improve the process of collaboration in other networked-science projects. Though limited by its focus on just two CCs at one institution, this research represents a crucial first step toward defining the work of CCs and what constitutes facilitation in Coordinated Collaborative Science.
What We Know about CCs
In the mid-1970s, the National Heart, Lung, and Blood Institute (NHLBI) began the Coordinating Center Models Project (CCMP) in an attempt to better understand CCs in clinical trials (Symposium on Coordinating Clinical Trials, 1978). At that time, clinical trials were still a fairly new method of doing research, and large amounts of money were being spent to coordinate those trials. Yet very little was known about what made a good CC or how to run a CC most effectively. To address these issues, a CCMP research team was designated, made up of scientists who were interested in the design and implementation of clinical trials. Their approach consisted of a survey of those involved in six NHLBI-funded clinical trials, as well as interviews with key staff members. The results were reported at a conference in 1978 and published soon after (Symposium on Coordinating Clinical Trials, 1978).
One of the key findings of the CCMP was that it was not possible to identify a common set of activities across the CCs (Symposium on Coordinating Clinical Trials, 1978). The research group concluded that there was no one model of a CC. They apparently did not consider the possibility that the great variation in activities and attitudes stemmed from the fact that CCs represented a new organizational model with no existing blueprint and that CC leaders were creating policies and procedures in reaction to the events around them. Perhaps the variation could be traced to the lack of standards both for running a CC and for communicating among CC leaders.
Soon after the CCMP report was published, investigators from several clinical trials published articles about their CCs. These were not empirical studies but, rather, reports written by the CC and clinical-trial leadership detailing how their own CC worked, including a list of the activities for which the CC was responsible, as well as assessments of issues or problems and particularly interesting solutions that were devised for working in a clinical trial. Although the articles described what a CC should do at vastly different levels of detail, all stressed that the primary responsibility was to ensure the quality of the science. Blumenstein, James, Lind, and Mitchell (1995) stated that the CC's primary mission is "to assure the validity of study findings that eventually will be disseminated in publications and public presentation" (p. 4). Going into slightly more detail, Mowery and Williams (1979) wrote that monitoring protocol implementation and adherence is the primary responsibility of the CC. Rifkind (1980) added delivery of results to the community in a timely and high-quality manner.
The specific responsibilities listed by these authors vary widely, ranging in level of detail from "statistical and content methodological support" (Bangdiwala, de Paula, Ramiro, & Munoz, 2003, p. 61) to "ordering study medications" (Meinert, Heinz, & Forman, 1983, p. 356). Some articles divided responsibilities into categories, most of which share common themes, if not specific labels. These categories include: (1) statistical coordination and management; (2) study coordination; and (3) administrative and secretarial support. The first category involves data, including data management and analysis, monitoring data collection, and performing quality assurance (see, for example: Blumenstein et al., 1995; Bangdiwala et al., 2003; Meinert et al., 1983; Curb et al., 1983; Margitic, Morgan, Sager, & Furberg, 1995; Greene, Hart, & Wagner, 2005; Lachin, 1980; Berge, 1980; and Winget et al., 2005). The second category involves coordinating studies, including developing protocols and forms, monitoring adherence to the protocol or performance monitoring, developing computer systems, training staff, documenting and archiving study information, communications, adhering to institutional policies, reporting, allocating CC resources, and preparing manuscripts. The third category, administrative and secretarial support, includes functions such as fiscal management, meeting and site-visit organization, budget preparation and management, securing equipment rentals, and personnel management, as well as general secretarial support (Bangdiwala et al., 2003; Meinert et al., 1983; Curb et al., 1983). These last two categories were sometimes conflated into one, but the described duties were consistent.
One overarching theme raised in some of the papers is the difficulty of staffing a CC. CCs are expected to have on-staff expertise in a wide range of activities, including administration, statistics, federal regulations, human subjects, technology, and organizational development. At the same time, the CC's organizational structure is expected to evolve over the course of the project in response to changes in the work, while minimizing costs. At a workshop at the CCMP kickoff in 1977, the group reported:
One major managerial problem has to do with the establishment of a large, well-trained staff and whether personnel should be retained or transferred out once a study is terminated. Many university-based coordinating centers are locked into the cycle of maintaining these staff positions and have invested much time and effort in staff training in order to fulfill their function. Frequently the only way personnel can be retained is to proceed directly into another study. Since this option is not always available, there is a clear danger in creating too large a coordinating...