Responsible Conduct of Research (RCR) education and training too often emphasize rules like "Do not falsify data" or "Do not plagiarize." These are simple extrapolations of what most researchers learned in kindergarten: lying and stealing are wrong. Reminding researchers of such rules involves stating the obvious, with the result that RCR education and training may be perceived as boring, unnecessary, and ineffective.
However, not all issues in research ethics are so clear-cut. In a survey by Martinson, Anderson, & de Vries (2005) of over 1,700 researchers, 33% reported engaging in so-called "questionable research practices" such as dropping data points from analyses based on a hunch or inappropriately assigning authorship. The example of inappropriate authorship is particularly instructive. First, practices for assigning authorship vary across disciplines (Steneck, 2004). Second, even in a discipline such as medicine, in which international standards have been published (International Committee of Medical Journal Editors, 2007), authorship assignment has not become standardized. A recent review of 234 biomedical journals found that 41% gave no guidance about authorship and only 19% were based on the current criteria of the International Committee of Medical Journal Editors (Wager, 2007). Uncertainty about criteria helps to explain the high rates at which researchers admit to assigning authorship in a questionable manner. Yet, given a lack of standardized criteria within professions, even RCR instructors are uncertain what should be taught in the area of authorship.
While rates of strict research misconduct (data falsification, fabrication, or plagiarism) are much lower than rates of questionable practices, they are also higher than many might assume. A survey by the U.S. Office of Research Integrity (ORI) of researchers holding funding from the National Institutes of Health (NIH) at 605 different institutions asked how many times researchers had observed suspected research misconduct in their own departments over the previous three academic years (Titus, Wells, & Rhoades, 2008). A total of 2,212 researchers completed the survey (a 51% response rate); they reported observing a total of 201 instances. By extrapolating this rate of observed suspected misconduct--conservatively assuming that the 49% who did not respond observed no instances of misconduct--the authors estimated that there are more than 2,300 observations of likely misconduct per year in research funded by the U.S. Department of Health and Human Services (DHHS).
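The arithmetic behind this extrapolation can be sketched from the figures quoted above. The sketch below stops at the per-researcher annual rate, since the DHHS-wide population count the authors scaled by is not reproduced here; only the survey counts come from the study, and the variable names are ours.

```python
# Sketch of the Titus, Wells, & Rhoades (2008) extrapolation, using only
# the figures quoted in the text.
respondents = 2212      # researchers who completed the survey
response_rate = 0.51    # 51% response rate
instances = 201         # observed suspected misconduct, over 3 years
years = 3

total_surveyed = respondents / response_rate   # ~4,337 researchers contacted
instances_per_year = instances / years         # 67 observations per year

# Conservative assumption from the study: the 49% who did not respond
# observed no misconduct, so the denominator is everyone surveyed.
rate_per_researcher_year = instances_per_year / total_surveyed

print(f"{rate_per_researcher_year:.4f} observations per researcher-year")
# Scaling this rate up to the full population of DHHS-funded researchers
# yields the authors' estimate of more than 2,300 observations per year.
```

Because non-respondents are counted in the denominator but contribute zero observations, the resulting rate is a floor, which is what makes the 2,300-per-year figure striking.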
Given that ORI receives an average of only 24 institutional investigation reports per year (approximately 1% of the estimated incidents observed), these numbers suggest the need for RCR education and training--not only to reduce rates of misconduct, but also to provide researchers with guidance on how to respond to observed misconduct. Yet this topic, too, is controversial. Real-world decisions regarding whistle-blowing are often far more complex (Smith, 2006) and their consequences far more devastating (Couzin, 2006) than ethics textbooks suggest. Thus it is not sufficient for RCR instructors simply to remind people of a duty to report misconduct, yet it remains unclear precisely what content or standards should be taught.
In 2000, ORI identified nine core areas that RCR courses should address: (1) data acquisition, management, sharing, and ownership; (2) mentor/trainee responsibilities; (3) publication practices and responsible authorship; (4) peer review; (5) collaborative science; (6) human subjects; (7) research involving animals; (8) research misconduct; and (9) conflict of interest and commitment. While these core areas provide a useful initial framework, there is no evidence of professional consensus that ORI's list includes the most important areas of RCR, nor what content should be taught and assessed within the core areas (Steneck & Bulger, 2007). For example, Pimple (2002) has recommended approaching RCR through the lens of six domains, some of which overlap with the nine core areas, and some of which extend into new areas such as social responsibilities (including fiscal responsibilities, advocacy by researchers, and environmental impact).
RCR trainers may also have different goals in mind: to convey knowledge of right and wrong; to foster professional virtues; to inculcate values that support good science; to raise awareness of ethical issues; to motivate people to do what is right; and--most ambitiously--to improve behavior (DuBois, Ciesla, & Voss, 2001). The behavioral goal is probably the most widely proffered--even if controversial--insofar as ethics instructors frequently begin courses, textbooks, or funding proposals by citing instances of scientific misbehavior, thus implying that RCR training can help prevent such events. In this vein, one leading research administrator writes, "the value of ... RCR education from an administrative perspective can be summed up in the oft-used adage, an ounce of prevention is worth a pound of cure" (Vasgird, 2007, p. 835).
Two studies have examined the content and goals of RCR education and training. In 2005, Heitman and Bulger published a content analysis of 20 RCR textbooks. Content reflected each of ORI's core areas and became more comprehensive after ORI published its policy on RCR instruction in 2000. The authors also identified gaps in the core areas of compliance, ethics of lab safety, institutional responsibilities, and the role of scientists in society (Heitman & Bulger, 2005). Kalichman and Plemmons (2007) studied the goals of existing education and training programs. They conducted interviews with 50 instructors and identified over 50 distinct goals pertaining to knowledge, skills, attitudes, and behavior. They found that actual educational goals varied widely across instructors.
These two studies reinforce the need to pursue consensus on RCR instruction. On the one hand, important gaps appear to exist in RCR textbooks (e.g., institutional responsibilities and the role of scientists in society), while on the other, the study of actual education and training programs identified over 50 distinct educational goals, which varied widely across instructors. Add this to the vagaries surrounding authorship and whistle-blowing, and a muddy picture of the goals and content of RCR instruction emerges.
Whereas these previous studies examined the goals or content of existing RCR education and training programs and materials, our project sought to establish a consensus among experts on what RCR education and training should look like. We addressed four specific questions:
(1) What should be the overarching goals of RCR training (e.g., knowledge, problem-solving skills, or virtue)?
(2) Are the nine core areas of RCR instruction identified by ORI complete, or should additional core areas be addressed?
(3) Within the core areas, what specific content should be taught?
(4) What objectives and content should be assessed?
Methods: Delphi Expert Panels
About Delphi Consensus Panels
One way of developing recommendations for a field is to convene a diverse panel of experts to engage significant questions. Such an approach is regularly used by the U.S. National Academies of Science to address questions in the fields of engineering, medicine, and science. With funding from ORI, we used an online Delphi panel process to foster an expert consensus. Delphi panels involve administering a questionnaire to groups of individuals across several rounds with the aim of identifying shared evaluations or recommendations (Ferguson, 2000). Key elements of the Delphi process are a structured flow of information, controlled feedback to participants, statistical analysis of responses, and participant anonymity. Interactions among panel members are controlled by a coordinator, who filters feedback and organizes data for subsequent presentation in the next round. The Delphi method maximizes the benefits of group decision-making while the anonymity of the process minimizes limitations such as domineering group members, personality conflicts, or groupthink (Delbecq, Van de Ven, & Gustafson, 1975). Other advantages of an online Delphi method include its relatively low cost and its convenience for participants, who can access the survey at any time of day.
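The "statistical analysis of responses" and "controlled feedback" steps can be illustrated with a minimal sketch: after each rating round, the coordinator computes a group median and interquartile range (IQR) for every item and returns only those summaries to the panel, never an individual, attributable response. The function name, items, and 1-5 rating scale below are illustrative, not drawn from the project itself.

```python
from statistics import median, quantiles

def round_feedback(ratings_by_item):
    """Summarize one Delphi rating round for anonymous feedback.

    ratings_by_item maps each questionnaire item to the panelists'
    numeric ratings (e.g., importance on a 1-5 scale). Only the group
    median and interquartile range go back to panelists, preserving
    anonymity while showing where the group currently stands.
    """
    summary = {}
    for item, ratings in ratings_by_item.items():
        q1, _, q3 = quantiles(ratings, n=4)  # quartile cut points
        summary[item] = {"median": median(ratings), "iqr": q3 - q1}
    return summary

# Illustrative round: six panelists rate two candidate RCR topics.
feedback = round_feedback({
    "responsible authorship": [4, 5, 4, 3, 5, 4],
    "lab safety ethics":      [2, 3, 2, 4, 3, 2],
})
```

In practice, Delphi coordinators often treat a shrinking IQR across rounds as the signal that an item is converging toward consensus, and re-present only the unsettled items in the next round.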
Delphi Panel Procedures
Because few people possess expertise in all areas of RCR, we formed four separate expert panels. Each panel worked independently and simultaneously. Our Delphi process involved multiple rounds of questioning. Round 1 consisted of an open-response format. Panelists were directed to one of four websites corresponding to their panel assignment(s), where responses were collected in text-boxes.
Our Objectives panelists were asked: (1) What should be the overarching educational objectives of RCR instruction; and (2) Are the nine core areas of RCR instruction complete, or should new core areas be addressed within RCR instruction?
Scientific Data panelists were asked: Within RCR instructional programs, what specific topics should be taught and assessed in the areas of: (1) Data acquisition, management, sharing and ownership; and (2) Research misconduct?
Scientific Relationships panelists were asked: Within RCR instructional programs, what specific topics should be taught and assessed in the core areas of: (1) Mentor/trainee responsibilities; (2) Collaborative science; and (3) Conflicts of interest and commitment?
Scientific Publications panelists were asked: Within RCR instructional programs, what specific topics should be taught and assessed in the core areas of: (1) Publication practices and responsible authorship; and (2) Peer review?
We excluded from our project two of ORI's nine core areas for RCR instruction: human subjects and animals. There were several reasons for this: (1) Institutional Review Boards and Institutional Animal Care and Use Committees typically mandate ethics...